docker image of sonarqube is not running with mysql db configuration - docker

I am trying to run the SonarQube Docker image with a MySQL database using the docker command below:
sudo docker run -d --name hg-sonarqube \
-p 9000:9000 \
-e SONARQUBE_JDBC_USERNAME='sonar' \
-e SONARQUBE_JDBC_PASSWORD='sonar' \
-e SONARQUBE_JDBC_URL='jdbc:mysql://localhost:3306/sonar?useUnicode=true&characterEncoding=utf8&rewriteBatchedStatements=true&useConfigs=maxPerformance' \
sonarqube
But the container does not stay up, due to this error:
2016.12.28 11:20:11 INFO web[][o.sonar.db.Database] Create JDBC data source for jdbc:mysql://localhost:3306/sonar?useUnicode=true&characterEncoding=utf8&rewriteBatchedStatements=true&useConfigs=maxPerformance
2016.12.28 11:20:11 ERROR web[][o.a.c.c.C.[.[.[/]] Exception sending context initialized event to listener instance of class org.sonar.server.platform.web.PlatformServletContextListener
java.lang.IllegalStateException: Can not connect to database. Please check connectivity and settings (see the properties prefixed by 'sonar.jdbc.').
at org.sonar.db.DefaultDatabase.checkConnection(DefaultDatabase.java:108)
The MySQL service is running and the sonar database exists. I used the following commands to create the database and grant privileges on Ubuntu 14.04.
echo "GRANT ALL PRIVILEGES ON *.* TO 'root'#'%' IDENTIFIED BY 'welcome123'; flush privileges;" | mysql -u root -pwelcome123
echo "CREATE DATABASE sonar CHARACTER SET utf8 COLLATE utf8_general_ci; CREATE USER 'sonar' IDENTIFIED BY 'sonar';GRANT ALL PRIVILEGES ON sonar.* TO 'sonar'#'%' IDENTIFIED BY 'sonar'; GRANT ALL ON sonar.* TO 'sonar'#'localhost' IDENTIFIED BY 'sonar'; flush privileges;" | mysql -u root -pwelcome123
Full Log file:
2016.12.28 11:19:58 INFO app[][o.s.a.AppFileSystem] Cleaning or creating temp directory /opt/sonarqube/temp
2016.12.28 11:19:58 INFO app[][o.s.p.m.JavaProcessLauncher] Launch process[es]: /usr/lib/jvm/java-8-openjdk-amd64/jre/bin/java -Djava.awt.headless=true -Xmx1G -Xms256m -Xss256k -Djna.nosys=true -XX:+UseParNewGC -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=75 -XX:+UseCMSInitiatingOccupancyOnly -XX:+HeapDumpOnOutOfMemoryError -Djava.io.tmpdir=/opt/sonarqube/temp -javaagent:/usr/lib/jvm/java-8-openjdk-amd64/jre/lib/management-agent.jar -cp ./lib/common/*:./lib/search/* org.sonar.search.SearchServer /opt/sonarqube/temp/sq-process5713024831851311243properties
2016.12.28 11:19:59 INFO es[][o.s.p.ProcessEntryPoint] Starting es
2016.12.28 11:19:59 INFO es[][o.s.s.EsSettings] Elasticsearch listening on /127.0.0.1:9001
2016.12.28 11:19:59 INFO es[][o.elasticsearch.node] [sonarqube] version[2.3.5], pid[18], build[90f439f/2016-07-27T10:36:52Z]
2016.12.28 11:19:59 INFO es[][o.elasticsearch.node] [sonarqube] initializing ...
2016.12.28 11:19:59 INFO es[][o.e.plugins] [sonarqube] modules [], plugins [], sites []
2016.12.28 11:19:59 INFO es[][o.elasticsearch.env] [sonarqube] using [1] data paths, mounts [[/opt/sonarqube/data (/dev/sda1)]], net usable_space [24.2gb], net total_space [28.8gb], spins? [possibly], types [ext4]
2016.12.28 11:19:59 INFO es[][o.elasticsearch.env] [sonarqube] heap size [1007.3mb], compressed ordinary object pointers [true]
2016.12.28 11:20:03 INFO es[][o.elasticsearch.node] [sonarqube] initialized
2016.12.28 11:20:03 INFO es[][o.elasticsearch.node] [sonarqube] starting ...
2016.12.28 11:20:03 INFO es[][o.e.transport] [sonarqube] publish_address {127.0.0.1:9001}, bound_addresses {127.0.0.1:9001}
2016.12.28 11:20:03 INFO es[][o.e.discovery] [sonarqube] sonarqube/CPgnfx6NTe2aO07d6fR0Bg
2016.12.28 11:20:06 INFO es[][o.e.cluster.service] [sonarqube] new_master {sonarqube}{CPgnfx6NTe2aO07d6fR0Bg}{127.0.0.1}{127.0.0.1:9001}{rack_id=sonarqube, master=true}, reason: zen-disco-join(elected_as_master, [0] joins received)
2016.12.28 11:20:06 INFO es[][o.elasticsearch.node] [sonarqube] started
2016.12.28 11:20:06 INFO es[][o.e.gateway] [sonarqube] recovered [0] indices into cluster_state
2016.12.28 11:20:06 INFO app[][o.s.p.m.Monitor] Process[es] is up
2016.12.28 11:20:06 INFO app[][o.s.p.m.JavaProcessLauncher] Launch process[web]: /usr/lib/jvm/java-8-openjdk-amd64/jre/bin/java -Djava.awt.headless=true -Dfile.encoding=UTF-8 -Djruby.management.enabled=false -Djruby.compile.invokedynamic=false -Xmx512m -Xms128m -XX:+HeapDumpOnOutOfMemoryError -Djava.security.egd=file:/dev/./urandom -Djava.io.tmpdir=/opt/sonarqube/temp -javaagent:/usr/lib/jvm/java-8-openjdk-amd64/jre/lib/management-agent.jar -cp ./lib/common/*:./lib/server/*:/opt/sonarqube/lib/jdbc/mysql/mysql-connector-java-5.1.39.jar org.sonar.server.app.WebServer /opt/sonarqube/temp/sq-process6242669754365841464properties
2016.12.28 11:20:08 INFO web[][o.s.p.ProcessEntryPoint] Starting web
2016.12.28 11:20:08 INFO web[][o.s.s.a.TomcatContexts] Webapp directory: /opt/sonarqube/web
2016.12.28 11:20:08 INFO web[][o.a.c.h.Http11NioProtocol] Initializing ProtocolHandler ["http-nio-0.0.0.0-9000"]
2016.12.28 11:20:08 INFO web[][o.a.t.u.n.NioSelectorPool] Using a shared selector for servlet write/read
2016.12.28 11:20:09 INFO web[][o.e.plugins] [Bushwacker] modules [], plugins [], sites []
2016.12.28 11:20:11 INFO web[][o.s.s.e.EsClientProvider] Connected to local Elasticsearch: [127.0.0.1:9001]
2016.12.28 11:20:11 INFO web[][o.s.s.p.LogServerVersion] SonarQube Server / 6.2 / 4a28f29f95254b58f3cf0a0871bc632e998403f5
2016.12.28 11:20:11 INFO web[][o.sonar.db.Database] Create JDBC data source for jdbc:mysql://localhost:3306/sonar?useUnicode=true&characterEncoding=utf8&rewriteBatchedStatements=true&useConfigs=maxPerformance
2016.12.28 11:20:11 ERROR web[][o.a.c.c.C.[.[.[/]] Exception sending context initialized event to listener instance of class org.sonar.server.platform.web.PlatformServletContextListener
java.lang.IllegalStateException: Can not connect to database. Please check connectivity and settings (see the properties prefixed by 'sonar.jdbc.').
at org.sonar.db.DefaultDatabase.checkConnection(DefaultDatabase.java:108)
at org.sonar.db.DefaultDatabase.start(DefaultDatabase.java:75)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.picocontainer.lifecycle.ReflectionLifecycleStrategy.invokeMethod(ReflectionLifecycleStrategy.java:110)
at org.picocontainer.lifecycle.ReflectionLifecycleStrategy.start(ReflectionLifecycleStrategy.java:89)
at org.sonar.core.platform.ComponentContainer$1.start(ComponentContainer.java:320)
at org.picocontainer.injectors.AbstractInjectionFactory$LifecycleAdapter.start(AbstractInjectionFactory.java:84)
at org.picocontainer.behaviors.AbstractBehavior.start(AbstractBehavior.java:169)
at org.picocontainer.behaviors.Stored$RealComponentLifecycle.start(Stored.java:132)
at org.picocontainer.behaviors.Stored.start(Stored.java:110)
at org.picocontainer.DefaultPicoContainer.potentiallyStartAdapter(DefaultPicoContainer.java:1016)
at org.picocontainer.DefaultPicoContainer.startAdapters(DefaultPicoContainer.java:1009)
at org.picocontainer.DefaultPicoContainer.start(DefaultPicoContainer.java:767)
at org.sonar.core.platform.ComponentContainer.startComponents(ComponentContainer.java:141)
at org.sonar.server.platform.platformlevel.PlatformLevel.start(PlatformLevel.java:88)
at org.sonar.server.platform.Platform.start(Platform.java:216)
at org.sonar.server.platform.Platform.startLevel1Container(Platform.java:175)
at org.sonar.server.platform.Platform.init(Platform.java:90)
at org.sonar.server.platform.web.PlatformServletContextListener.contextInitialized(PlatformServletContextListener.java:44)
at org.apache.catalina.core.StandardContext.listenerStart(StandardContext.java:4812)
at org.apache.catalina.core.StandardContext.startInternal(StandardContext.java:5255)
at org.apache.catalina.util.LifecycleBase.start(LifecycleBase.java:147)
at org.apache.catalina.core.ContainerBase$StartChild.call(ContainerBase.java:1408)
at org.apache.catalina.core.ContainerBase$StartChild.call(ContainerBase.java:1398)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.commons.dbcp.SQLNestedException: Cannot create PoolableConnectionFactory (Communications link failure
The last packet sent successfully to the server was 0 milliseconds ago. The driver has not received any packets from the server.)
at org.apache.commons.dbcp.BasicDataSource.createPoolableConnectionFactory(BasicDataSource.java:1549)
at org.apache.commons.dbcp.BasicDataSource.createDataSource(BasicDataSource.java:1388)
at org.apache.commons.dbcp.BasicDataSource.getConnection(BasicDataSource.java:1044)
at org.sonar.db.profiling.NullConnectionInterceptor.getConnection(NullConnectionInterceptor.java:31)
at org.sonar.db.profiling.ProfiledDataSource.getConnection(ProfiledDataSource.java:323)
at org.sonar.db.DefaultDatabase.checkConnection(DefaultDatabase.java:106)
... 30 common frames omitted
Caused by: com.mysql.jdbc.exceptions.jdbc4.CommunicationsException: Communications link failure
The last packet sent successfully to the server was 0 milliseconds ago. The driver has not received any packets from the server.
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)

This might be helpful for those who are still facing this issue.
You might be running MySQL in another container. You can use the docker-compose file below to put both containers on the same network:
# Use sonar/sonar as user/password credentials
version: '3.1'
services:
  sonarqube:
    image: sonarqube:5.1.1
    networks:
      - sonarqube-network
    ports:
      - "9000:9000"
      - "3306:3306"
    environment:
      - SONARQUBE_JDBC_USERNAME=sonar
      - SONARQUBE_JDBC_PASSWORD=sonar
      - SONARQUBE_JDBC_URL=jdbc:mysql://localhost:3306/sonar?useUnicode=true&characterEncoding=utf8&rewriteBatchedStatements=true
  db:
    image: mysql
    networks:
      - sonarqube-network
    environment:
      - MYSQL_ROOT_PASSWORD=sonar
      - MYSQL_DATABASE=sonar
      - MYSQL_USER=sonar
      - MYSQL_PASSWORD=sonar
networks:
  sonarqube-network:
Save the file as docker-compose.yml and run docker-compose up.
Please note this entry:
- 3306:3306
After that, try connecting to MySQL with:
mysql -u sonar -h localhost -p
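If you prefer plain docker run over docker-compose, the same idea (both containers on one user-defined network, with MySQL addressed by its container name) looks roughly like the sketch below; the network and container names are arbitrary:
# Create a user-defined network so the containers can resolve each other by name
docker network create sonarnet
# MySQL container (same credentials as in the compose file above)
docker run -d --name sonar-mysql --network sonarnet \
  -e MYSQL_ROOT_PASSWORD=sonar \
  -e MYSQL_DATABASE=sonar \
  -e MYSQL_USER=sonar \
  -e MYSQL_PASSWORD=sonar \
  mysql:5.7
# SonarQube container, pointing the JDBC URL at the MySQL container by name
docker run -d --name hg-sonarqube --network sonarnet -p 9000:9000 \
  -e SONARQUBE_JDBC_USERNAME=sonar \
  -e SONARQUBE_JDBC_PASSWORD=sonar \
  -e SONARQUBE_JDBC_URL='jdbc:mysql://sonar-mysql:3306/sonar?useUnicode=true&characterEncoding=utf8' \
  sonarqube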

Related

Failed to connect to spark-master:7077

I am trying to deploy my Spark application on Kubernetes. I followed the steps below:
Installed the Spark Kubernetes operator (spark-on-k8s-operator):
helm repo add spark-operator https://googlecloudplatform.github.io/spark-on-k8s-operator
helm install gcp-spark-operator spark-operator/spark-operator
Created a spark-app.py
from pyspark.sql.functions import *
from pyspark.sql import *
from pyspark import SparkContext
from pyspark.sql import SparkSession
from pyspark.sql.types import StructType, StructField, StringType, IntegerType

if __name__ == "__main__":
    spark = SparkSession.builder.appName('spark-on-kubernetes-test').getOrCreate()
    data2 = [("James", "", "Smith", "36636", "M", 3000),
             ("Michael", "Rose", "", "40288", "M", 4000),
             ("Robert", "", "Williams", "42114", "M", 4000),
             ("Maria", "Anne", "Jones", "39192", "F", 4000),
             ("Jen", "Mary", "Brown", "", "F", -1)
             ]
    schema = StructType([
        StructField("firstname", StringType(), True),
        StructField("middlename", StringType(), True),
        StructField("lastname", StringType(), True),
        StructField("id", StringType(), True),
        StructField("gender", StringType(), True),
        StructField("salary", IntegerType(), True)
    ])
    df = spark.createDataFrame(data=data2, schema=schema)
    df.printSchema()
    df.show(truncate=False)
    print("program is completed !")
Then I created the new image with my application:
FROM bitnami/spark
USER root
WORKDIR /app
COPY requirements.txt .
RUN pip3 install -r requirements.txt
COPY spark-app.py .
Then I created the spark-application.yaml file:
apiVersion: "sparkoperator.k8s.io/v1beta2"
kind: SparkApplication
metadata:
name: pyspark-app
namespace: default
spec:
type: Python
mode: cluster
image: "test/spark-k8s-app:1.0"
imagePullPolicy: Always
mainApplicationFile: local:///app/spark-app.py
sparkVersion: 3.3.0
restartPolicy:
type: OnFailure
onFailureRetries: 3
onFailureRetryInterval: 10
onSubmissionFailureRetries: 5
onSubmissionFailureRetryInterval: 20
driver:
cores: 1
coreLimit: "1200m"
memory: "512m"
serviceAccount: spark
labels:
version: 3.3.0
volumeMounts:
- name: "test-volume"
mountPath: "/tmp"
executor:
cores: 1
instances: 1
memory: "512m"
labels:
version: 3.3.0
volumeMounts:
- name: "test-volume"
mountPath: "/tmp"
But when I try to deploy the YAML file, I get the error below:
$ kubectl logs pyspark-app-driver
13:04:01.12 Welcome to the Bitnami spark container
13:04:01.12 Subscribe to project updates by watching https://github.com/bitnami/containers
13:04:01.12 Submit issues and feature requests at https://github.com/bitnami/containers/issues
22/09/24 13:04:04 INFO SparkContext: Running Spark version 3.3.0
22/09/24 13:04:04 INFO ResourceUtils: ==============================================================
22/09/24 13:04:04 INFO ResourceUtils: No custom resources configured for spark.driver.
22/09/24 13:04:04 INFO ResourceUtils: ==============================================================
22/09/24 13:04:04 INFO SparkContext: Submitted application: ml_framework
22/09/24 13:04:04 INFO ResourceProfile: Default ResourceProfile created, executor resources: Map(cores -> name: cores, amount: 1, script: , vendor: , memory -> name: memory, amount: 512, script: , vendor: , offHeap -> name: offHeap, amount: 0, script: , vendor: ), task resources: Map(cpus -> name: cpus, amount: 1.0)
22/09/24 13:04:04 INFO ResourceProfile: Limiting resource is cpus at 1 tasks per executor
22/09/24 13:04:04 INFO ResourceProfileManager: Added ResourceProfile id: 0
22/09/24 13:04:04 INFO SecurityManager: Changing view acls to: root
22/09/24 13:04:04 INFO SecurityManager: Changing modify acls to: root
22/09/24 13:04:04 INFO SecurityManager: Changing view acls groups to:
22/09/24 13:04:04 INFO SecurityManager: Changing modify acls groups to:
22/09/24 13:04:04 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users
with view permissions: Set(root); groups with view permissions: Set(); users with modify permissions: Set(root); groups with modify permissions: Set()
22/09/24 13:04:05 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
22/09/24 13:04:05 INFO Utils: Successfully started service 'sparkDriver' on port 7078.
22/09/24 13:04:05 INFO SparkEnv: Registering MapOutputTracker
22/09/24 13:04:05 INFO SparkEnv: Registering BlockManagerMaster
22/09/24 13:04:05 INFO BlockManagerMasterEndpoint: Using org.apache.spark.storage.DefaultTopologyMapper for getting topology information
22/09/24 13:04:05 INFO BlockManagerMasterEndpoint: BlockManagerMasterEndpoint up
22/09/24 13:04:05 INFO SparkEnv: Registering BlockManagerMasterHeartbeat
22/09/24 13:04:05 INFO DiskBlockManager: Created local directory at /var/data/spark-019ba05b-dba8-4350-a281-ffa35b54d840/blockmgr-20d5c478-8e93-42f6-85a9-9ed070f50b2b
22/09/24 13:04:05 INFO MemoryStore: MemoryStore started with capacity 117.0 MiB
22/09/24 13:04:05 INFO SparkEnv: Registering OutputCommitCoordinator
22/09/24 13:04:06 INFO Utils: Successfully started service 'SparkUI' on port 4040.
22/09/24 13:04:06 INFO StandaloneAppClient$ClientEndpoint: Connecting to master spark://spark-master:7077...
22/09/24 13:04:10 WARN TransportClientFactory: DNS resolution failed for spark-master:7077 took 4005 ms
22/09/24 13:04:10 WARN StandaloneAppClient$ClientEndpoint: Failed to connect to master spark-master:7077
org.apache.spark.SparkException: Exception thrown in awaitResult:
at org.apache.spark.util.ThreadUtils$.awaitResult(ThreadUtils.scala:301)
at org.apache.spark.rpc.RpcTimeout.awaitResult(RpcTimeout.scala:75)
at org.apache.spark.rpc.RpcEnv.setupEndpointRefByURI(RpcEnv.scala:102)
at org.apache.spark.rpc.RpcEnv.setupEndpointRef(RpcEnv.scala:110)
at org.apache.spark.deploy.client.StandaloneAppClient$ClientEndpoint$$anon$1.run(StandaloneAppClient.scala:107)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:750)
Caused by: java.io.IOException: Failed to connect to spark-master:7077
at org.apache.spark.network.client.TransportClientFactory.createClient(TransportClientFactory.java:288)
at org.apache.spark.network.client.TransportClientFactory.createClient(TransportClientFactory.java:218)
at org.apache.spark.network.client.TransportClientFactory.createClient(TransportClientFactory.java:230)
at org.apache.spark.rpc.netty.NettyRpcEnv.createClient(NettyRpcEnv.scala:204)
at org.apache.spark.rpc.netty.Outbox$$anon$1.call(Outbox.scala:202)
at org.apache.spark.rpc.netty.Outbox$$anon$1.call(Outbox.scala:198)
... 4 more
Caused by: java.net.UnknownHostException: spark-master
at java.net.InetAddress.getAllByName0(InetAddress.java:1287)
at java.net.InetAddress.getAllByName(InetAddress.java:1199)
at java.net.InetAddress.getAllByName(InetAddress.java:1127)
at java.net.InetAddress.getByName(InetAddress.java:1077)
at io.netty.util.internal.SocketUtils$8.run(SocketUtils.java:156)
at io.netty.util.internal.SocketUtils$8.run(SocketUtils.java:153)
at java.security.AccessController.doPrivileged(Native Method)
at io.netty.util.internal.SocketUtils.addressByName(SocketUtils.java:153)
at io.netty.resolver.DefaultNameResolver.doResolve(DefaultNameResolver.java:41)
at io.netty.resolver.SimpleNameResolver.resolve(SimpleNameResolver.java:61)
at io.netty.resolver.SimpleNameResolver.resolve(SimpleNameResolver.java:53)
at io.netty.resolver.InetSocketAddressResolver.doResolve(InetSocketAddressResolver.java:55)
at io.netty.resolver.InetSocketAddressResolver.doResolve(InetSocketAddressResolver.java:31)
at io.netty.resolver.AbstractAddressResolver.resolve(AbstractAddressResolver.java:106)
at io.netty.bootstrap.Bootstrap.doResolveAndConnect0(Bootstrap.java:206)
at io.netty.bootstrap.Bootstrap.access$000(Bootstrap.java:46)
at io.netty.bootstrap.Bootstrap$1.operationComplete(Bootstrap.java:180)
at io.netty.bootstrap.Bootstrap$1.operationComplete(Bootstrap.java:166)
at io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:578)
at io.netty.util.concurrent.DefaultPromise.notifyListenersNow(DefaultPromise.java:552)
at io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:491)
at io.netty.util.concurrent.DefaultPromise.setValue0(DefaultPromise.java:616)
at io.netty.util.concurrent.DefaultPromise.setSuccess0(DefaultPromise.java:605)
at io.netty.util.concurrent.DefaultPromise.trySuccess(DefaultPromise.java:104)
at io.netty.channel.DefaultChannelPromise.trySuccess(DefaultChannelPromise.java:84)
at io.netty.channel.AbstractChannel$AbstractUnsafe.safeSetSuccess(AbstractChannel.java:990)
at io.netty.channel.AbstractChannel$AbstractUnsafe.register0(AbstractChannel.java:516)
at io.netty.channel.AbstractChannel$AbstractUnsafe.access$200(AbstractChannel.java:429)
at io.netty.channel.AbstractChannel$AbstractUnsafe$1.run(AbstractChannel.java:486)
at io.netty.util.concurrent.AbstractEventExecutor.safeExecute(AbstractEventExecutor.java:164)
at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:469)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:503)
at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:986)
at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
... 1 more
22/09/24 13:04:26 INFO StandaloneAppClient$ClientEndpoint: Connecting to master spark://spark-master:7077...
22/09/24 13:04:30 WARN TransportClientFactory: DNS resolution failed for spark-master:7077 took 4006 ms
22/09/24 13:04:30 WARN StandaloneAppClient$ClientEndpoint: Failed to connect to master spark-master:7077
org.apache.spark.SparkException: Exception thrown in awaitResult:
at org.apache.spark.util.ThreadUtils$.awaitResult(ThreadUtils.scala:301)
at org.apache.spark.rpc.RpcTimeout.awaitResult(RpcTimeout.scala:75)
at org.apache.spark.rpc.RpcEnv.setupEndpointRefByURI(RpcEnv.scala:102)
at org.apache.spark.rpc.RpcEnv.setupEndpointRef(RpcEnv.scala:110)
at org.apache.spark.deploy.client.StandaloneAppClient$ClientEndpoint$$anon$1.run(StandaloneAppClient.scala:107)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:750)
Caused by: java.io.IOException: Failed to connect to spark-master:7077
at org.apache.spark.network.client.TransportClientFactory.createClient(TransportClientFactory.java:288)
at org.apache.spark.network.client.TransportClientFactory.createClient(TransportClientFactory.java:218)
at org.apache.spark.network.client.TransportClientFactory.createClient(TransportClientFactory.java:230)
at org.apache.spark.rpc.netty.NettyRpcEnv.createClient(NettyRpcEnv.scala:204)
at org.apache.spark.rpc.netty.Outbox$$anon$1.call(Outbox.scala:202)
at org.apache.spark.rpc.netty.Outbox$$anon$1.call(Outbox.scala:198)
... 4 more
Caused by: java.net.UnknownHostException: spark-master
at java.net.InetAddress.getAllByName0(InetAddress.java:1287)
at java.net.InetAddress.getAllByName(InetAddress.java:1199)
at java.net.InetAddress.getAllByName(InetAddress.java:1127)
at java.net.InetAddress.getByName(InetAddress.java:1077)
at io.netty.util.internal.SocketUtils$8.run(SocketUtils.java:156)
at io.netty.util.internal.SocketUtils$8.run(SocketUtils.java:153)
at java.security.AccessController.doPrivileged(Native Method)
at io.netty.util.internal.SocketUtils.addressByName(SocketUtils.java:153)
at io.netty.resolver.DefaultNameResolver.doResolve(DefaultNameResolver.java:41)
at io.netty.resolver.SimpleNameResolver.resolve(SimpleNameResolver.java:61)
at io.netty.resolver.SimpleNameResolver.resolve(SimpleNameResolver.java:53)
at io.netty.resolver.InetSocketAddressResolver.doResolve(InetSocketAddressResolver.java:55)
at io.netty.resolver.InetSocketAddressResolver.doResolve(InetSocketAddressResolver.java:31)
at io.netty.resolver.AbstractAddressResolver.resolve(AbstractAddressResolver.java:106)
at io.netty.bootstrap.Bootstrap.doResolveAndConnect0(Bootstrap.java:206)
at io.netty.bootstrap.Bootstrap.access$000(Bootstrap.java:46)
at io.netty.bootstrap.Bootstrap$1.operationComplete(Bootstrap.java:180)
at io.netty.bootstrap.Bootstrap$1.operationComplete(Bootstrap.java:166)
at io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:578)
at io.netty.util.concurrent.DefaultPromise.notifyListenersNow(DefaultPromise.java:552)
at io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:491)
at io.netty.util.concurrent.DefaultPromise.setValue0(DefaultPromise.java:616)
at io.netty.util.concurrent.DefaultPromise.setSuccess0(DefaultPromise.java:605)
at io.netty.util.concurrent.DefaultPromise.trySuccess(DefaultPromise.java:104)
at io.netty.channel.DefaultChannelPromise.trySuccess(DefaultChannelPromise.java:84)
at io.netty.channel.AbstractChannel$AbstractUnsafe.safeSetSuccess(AbstractChannel.java:990)
at io.netty.channel.AbstractChannel$AbstractUnsafe.register0(AbstractChannel.java:516)
at io.netty.channel.AbstractChannel$AbstractUnsafe.access$200(AbstractChannel.java:429)
at io.netty.channel.AbstractChannel$AbstractUnsafe$1.run(AbstractChannel.java:486)
at io.netty.util.concurrent.AbstractEventExecutor.safeExecute(AbstractEventExecutor.java:164)
at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:469)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:503)
at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:986)
at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
... 1 more
How can I resolve this?

Dockerfile with Tomcat catalina.sh run does not run

I can build the Dockerfile. When I do docker run -it path-to-image/tomcat9:latest and check the logs, there isn't a catalina.out, and the run fails with /bin/sh: ["catalina.sh",: command not found.
Here is my Dockerfile:
FROM gitlab-registry.gs.mil/gets-development/docker/openjdk11
USER root
# Copy Tomcat and start
ADD imageFiles/apache-tomcat-9.0.65.tar.gz /usr/local/
RUN mv /usr/local/apache-tomcat-9.0.65/ /usr/local/tomcat
ENV WORKPATH /usr/local
WORKDIR $WORKPATH
ENV CATALINA_HOME /usr/local/tomcat
ENV CATALINA_BASE /usr/local/tomcat
ENV PATH $PATH:$CATALINA_HOME/bin:$CATALINA_HOME/lib
EXPOSE 8080
CMD ["/usr/local/tomcat/bin/catalina.sh", "run"]
Build command:
docker build -t gitlab-registry.gs.mil/gets-development/docker/tomcat9-test .
Start command:
docker run --name tomcatTest -it gitlab-registry.gs.mil/gets-development/docker/tomcat9-test:latest /bin/bash
Trying to connect to localhost from inside the docker container fails:
curl: (7) Failed to connect to localhost port 8080: Connection refused
There are no log files:
[root@b058163e9605 local]# cd tomcat/logs/
[root@b058163e9605 logs]# ls -als
total 0
0 drwxr-x--- 2 root root 6 Jul 14 12:28 .
0 drwxr-xr-x 9 root root 220 Aug 5 16:17 ..
[root@b058163e9605 logs]#
This tells me that Tomcat did not start. When I start Tomcat manually inside the container, it launches successfully:
[root@b058163e9605 bin]# ./catalina.sh run
.....
08-Aug-2022 13:12:02.934 INFO [main] org.apache.coyote.AbstractProtocol.init Initializing ProtocolHandler ["http-nio-8080"]
08-Aug-2022 13:12:03.038 INFO [main] org.apache.catalina.startup.Catalina.load Server initialization in [1590] milliseconds
08-Aug-2022 13:12:03.204 INFO [main] org.apache.catalina.core.StandardService.startInternal Starting service [Catalina]
08-Aug-2022 13:12:03.205 INFO [main] org.apache.catalina.core.StandardEngine.startInternal Starting Servlet engine: [Apache Tomcat/9.0.65]
08-Aug-2022 13:12:03.224 INFO [main] org.apache.catalina.startup.HostConfig.deployDirectory Deploying web application directory [/usr/local/tomcat/webapps/ROOT]
08-Aug-2022 13:12:03.877 INFO [main] org.apache.catalina.startup.HostConfig.deployDirectory Deployment of web application directory [/usr/local/tomcat/webapps/ROOT] has finished in [652] ms
08-Aug-2022 13:12:03.879 INFO [main] org.apache.catalina.startup.HostConfig.deployDirectory Deploying web application directory [/usr/local/tomcat/webapps/docs]
08-Aug-2022 13:12:03.945 INFO [main] org.apache.catalina.startup.HostConfig.deployDirectory Deployment of web application directory [/usr/local/tomcat/webapps/docs] has finished in [66] ms
08-Aug-2022 13:12:03.947 INFO [main] org.apache.catalina.startup.HostConfig.deployDirectory Deploying web application directory [/usr/local/tomcat/webapps/examples]
08-Aug-2022 13:12:04.559 INFO [main] org.apache.catalina.startup.HostConfig.deployDirectory Deployment of web application directory [/usr/local/tomcat/webapps/examples] has finished in [613] ms
08-Aug-2022 13:12:04.562 INFO [main] org.apache.catalina.startup.HostConfig.deployDirectory Deploying web application directory [/usr/local/tomcat/webapps/host-manager]
08-Aug-2022 13:12:04.626 INFO [main] org.apache.catalina.startup.HostConfig.deployDirectory Deployment of web application directory [/usr/local/tomcat/webapps/host-manager] has finished in [63] ms
08-Aug-2022 13:12:04.626 INFO [main] org.apache.catalina.startup.HostConfig.deployDirectory Deploying web application directory [/usr/local/tomcat/webapps/manager]
08-Aug-2022 13:12:04.717 INFO [main] org.apache.catalina.startup.HostConfig.deployDirectory Deployment of web application directory [/usr/local/tomcat/webapps/manager] has finished in [90] ms
08-Aug-2022 13:12:04.733 INFO [main] org.apache.coyote.AbstractProtocol.start Starting ProtocolHandler ["http-nio-8080"]
08-Aug-2022 13:12:04.767 INFO [main] org.apache.catalina.startup.Catalina.start Server startup in [1728] milliseconds
Lastly, I checked the Docker logs; they show what I did inside the container, but no other information.
Please assist.
Your docker run command does not launch Tomcat but simply bash. Notice the last argument:
docker run --name tomcatTest -it gitlab-registry.gs.mil/gets-development/docker/tomcat9-test:latest /bin/bash
Change it to:
docker run --name tomcatTest gitlab-registry.gs.mil/gets-development/docker/tomcat9-test:latest
If you want a shell to investigate what is going on inside a running container, use
docker exec -it tomcatTest /bin/bash
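To reach Tomcat from the host afterwards, you will also want the container port published; a minimal sketch, reusing the container name from the question:
# Run with the image's default CMD (catalina.sh run) and publish Tomcat's port
docker run -d --name tomcatTest -p 8080:8080 gitlab-registry.gs.mil/gets-development/docker/tomcat9-test:latest
# Follow the Tomcat startup output, then check from the host
docker logs -f tomcatTest
curl http://localhost:8080/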

Jenkins in Docker Git Plugin allow local checkout

This same question was asked, but not with Jenkins in a Docker container:
Jenkins: allow local checkout
I had Jenkins running in a Docker container and was able to execute pipeline jobs with no problem.
After installing the container recently on a new machine, I'm getting:
ERROR: Checkout of Git remote 'file:///usr/src/' aborted because it references a local directory, which may be insecure. You can allow local checkouts anyway by setting the system property 'hudson.plugins.git.GitSCM.ALLOW_LOCAL_CHECKOUT' to true.
ERROR: Maximum checkout retry attempts reached, aborting
Finished: FAILURE
I tried this script in init.groovy.d:
import jenkins.model.Jenkins
import java.util.logging.LogManager
/* Jenkins home directory */
def jenkinsHome = Jenkins.instance.getRootDir().absolutePath
def logger = LogManager.getLogManager().getLogger("")
/* Replace the Key and value with the values you want to set.*/
/* System.setProperty(key, value) */
System.setProperty("hudson.plugins.git.GitSCM.ALLOW_LOCAL_CHECKOUT", "true")
logger.info("Jenkins Startup Script: set GitSCM.ALLOW_LOCAL_CHECKOUT to true . Script location : ${jenkinsHome}/init.groovy.d ")
I tried this inside the docker container
jenkins@3d5e0ebf919e:/$ java -D"hudson.plugins.git.GitSCM.ALLOW_LOCAL_CHECKOUT=true" -jar /usr/share/jenkins/jenkins.war
Running from: /usr/share/jenkins/jenkins.war
webroot: EnvVars.masterEnvVars.get("JENKINS_HOME")
2022-05-28 20:26:27.644+0000 [id=1] INFO org.eclipse.jetty.util.log.Log#initialized: Logging initialized #758ms to org.eclipse.jetty.util.log.JavaUtilLog
2022-05-28 20:26:27.744+0000 [id=1] INFO winstone.Logger#logInternal: Beginning extraction from war file
2022-05-28 20:26:27.792+0000 [id=1] WARNING o.e.j.s.handler.ContextHandler#setContextPath: Empty contextPath
2022-05-28 20:26:27.876+0000 [id=1] INFO org.eclipse.jetty.server.Server#doStart: jetty-9.4.43.v20210629; built: 2021-06-30T11:07:22.254Z; git: 526006ecfa3af7f1a27ef3a288e2bef7ea9dd7e8; jvm 11.0.15+10
2022-05-28 20:26:28.162+0000 [id=1] INFO o.e.j.w.StandardDescriptorProcessor#visitServlet: NO JSP Support for /, did not find org.eclipse.jetty.jsp.JettyJspServlet
2022-05-28 20:26:28.204+0000 [id=1] INFO o.e.j.s.s.DefaultSessionIdManager#doStart: DefaultSessionIdManager workerName=node0
2022-05-28 20:26:28.204+0000 [id=1] INFO o.e.j.s.s.DefaultSessionIdManager#doStart: No SessionScavenger set, using defaults
2022-05-28 20:26:28.205+0000 [id=1] INFO o.e.j.server.session.HouseKeeper#startScavenging: node0 Scavenging every 600000ms
2022-05-28 20:26:28.694+0000 [id=1] INFO hudson.WebAppMain#contextInitialized: Jenkins home directory: /var/jenkins_home found at: EnvVars.masterEnvVars.get("JENKINS_HOME")
2022-05-28 20:26:28.869+0000 [id=1] INFO o.e.j.s.handler.ContextHandler#doStart: Started w.#36c54a56{Jenkins v2.332.3,/,file:///var/jenkins_home/war/,AVAILABLE}{/var/jenkins_home/war}
2022-05-28 20:26:28.892+0000 [id=1] INFO o.e.j.server.AbstractConnector#doStop: Stopped ServerConnector#206a70ef{HTTP/1.1, (http/1.1)}{0.0.0.0:8080}
2022-05-28 20:26:28.893+0000 [id=1] INFO o.e.j.server.session.HouseKeeper#stopScavenging: node0 Stopped scavenging
2022-05-28 20:26:28.901+0000 [id=1] INFO hudson.WebAppMain#contextDestroyed: Shutting down a Jenkins instance that was still starting up
java.lang.Throwable: reason
at hudson.WebAppMain.contextDestroyed(WebAppMain.java:386)
at org.eclipse.jetty.server.handler.ContextHandler.callContextDestroyed(ContextHandler.java:1074)
at org.eclipse.jetty.servlet.ServletContextHandler.callContextDestroyed(ServletContextHandler.java:584)
at org.eclipse.jetty.server.handler.ContextHandler.contextDestroyed(ContextHandler.java:1037)
at org.eclipse.jetty.servlet.ServletHandler.doStop(ServletHandler.java:319)
at org.eclipse.jetty.util.component.AbstractLifeCycle.stop(AbstractLifeCycle.java:94)
at org.eclipse.jetty.util.component.ContainerLifeCycle.stop(ContainerLifeCycle.java:180)
at org.eclipse.jetty.util.component.ContainerLifeCycle.doStop(ContainerLifeCycle.java:201)
at org.eclipse.jetty.server.handler.AbstractHandler.doStop(AbstractHandler.java:108)
at org.eclipse.jetty.security.SecurityHandler.doStop(SecurityHandler.java:437)
at org.eclipse.jetty.security.ConstraintSecurityHandler.doStop(ConstraintSecurityHandler.java:423)
at org.eclipse.jetty.util.component.AbstractLifeCycle.stop(AbstractLifeCycle.java:94)
at org.eclipse.jetty.util.component.ContainerLifeCycle.stop(ContainerLifeCycle.java:180)
at org.eclipse.jetty.util.component.ContainerLifeCycle.doStop(ContainerLifeCycle.java:201)
at org.eclipse.jetty.server.handler.AbstractHandler.doStop(AbstractHandler.java:108)
at org.eclipse.jetty.server.session.SessionHandler.doStop(SessionHandler.java:520)
at org.eclipse.jetty.util.component.AbstractLifeCycle.stop(AbstractLifeCycle.java:94)
at org.eclipse.jetty.util.component.ContainerLifeCycle.stop(ContainerLifeCycle.java:180)
at org.eclipse.jetty.util.component.ContainerLifeCycle.doStop(ContainerLifeCycle.java:201)
at org.eclipse.jetty.server.handler.AbstractHandler.doStop(AbstractHandler.java:108)
at org.eclipse.jetty.server.handler.ContextHandler.stopContext(ContextHandler.java:1060)
at org.eclipse.jetty.servlet.ServletContextHandler.stopContext(ServletContextHandler.java:386)
at org.eclipse.jetty.webapp.WebAppContext.stopWebapp(WebAppContext.java:1454)
at org.eclipse.jetty.webapp.WebAppContext.stopContext(WebAppContext.java:1420)
at org.eclipse.jetty.server.handler.ContextHandler.doStop(ContextHandler.java:1114)
at org.eclipse.jetty.servlet.ServletContextHandler.doStop(ServletContextHandler.java:297)
at org.eclipse.jetty.webapp.WebAppContext.doStop(WebAppContext.java:547)
at org.eclipse.jetty.util.component.AbstractLifeCycle.stop(AbstractLifeCycle.java:94)
at org.eclipse.jetty.util.component.ContainerLifeCycle.stop(ContainerLifeCycle.java:180)
at org.eclipse.jetty.util.component.ContainerLifeCycle.doStop(ContainerLifeCycle.java:201)
at org.eclipse.jetty.server.handler.AbstractHandler.doStop(AbstractHandler.java:108)
at org.eclipse.jetty.server.Server.doStop(Server.java:470)
at org.eclipse.jetty.util.component.AbstractLifeCycle.stop(AbstractLifeCycle.java:94)
at winstone.Launcher.shutdown(Launcher.java:318)
at winstone.Launcher.<init>(Launcher.java:205)
at winstone.Launcher.main(Launcher.java:369)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:566)
at Main._main(Main.java:304)
at Main.main(Main.java:108)
2022-05-28 20:26:28.905+0000 [id=1] INFO o.e.j.s.handler.ContextHandler#doStop: Stopped w.#36c54a56{Jenkins v2.332.3,/,null,STOPPED}{/var/jenkins_home/war}
Exception in thread "Jenkins initialization thread" java.lang.NoClassDefFoundError: hudson/util/HudsonFailedToLoad
at hudson.WebAppMain$3.run(WebAppMain.java:261)
Caused by: java.lang.ClassNotFoundException: hudson.util.HudsonFailedToLoad
at java.base/java.net.URLClassLoader.findClass(URLClassLoader.java:476)
at java.base/java.lang.ClassLoader.loadClass(ClassLoader.java:589)
at java.base/java.lang.ClassLoader.loadClass(ClassLoader.java:522)
at org.eclipse.jetty.webapp.WebAppClassLoader.loadClass(WebAppClassLoader.java:538)
at java.base/java.lang.ClassLoader.loadClass(ClassLoader.java:522)
... 1 more
2022-05-28 20:26:28.908+0000 [id=1] INFO winstone.Logger#logInternal: Jetty shutdown successfully
java.io.IOException: Failed to start Jetty
at winstone.Launcher.<init>(Launcher.java:194)
at winstone.Launcher.main(Launcher.java:369)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:566)
at Main._main(Main.java:304)
at Main.main(Main.java:108)
Caused by: java.io.IOException: Failed to bind to 0.0.0.0/0.0.0.0:8080
at org.eclipse.jetty.server.ServerConnector.openAcceptChannel(ServerConnector.java:349)
at org.eclipse.jetty.server.ServerConnector.open(ServerConnector.java:310)
at org.eclipse.jetty.server.AbstractNetworkConnector.doStart(AbstractNetworkConnector.java:80)
at org.eclipse.jetty.server.ServerConnector.doStart(ServerConnector.java:234)
at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:73)
at org.eclipse.jetty.server.Server.doStart(Server.java:401)
at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:73)
at winstone.Launcher.<init>(Launcher.java:192)
... 7 more
Caused by: java.net.BindException: Address already in use
at java.base/sun.nio.ch.Net.bind0(Native Method)
at java.base/sun.nio.ch.Net.bind(Net.java:459)
at java.base/sun.nio.ch.Net.bind(Net.java:448)
at java.base/sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:227)
at java.base/sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:80)
at org.eclipse.jetty.server.ServerConnector.openAcceptChannel(ServerConnector.java:344)
... 14 more
2022-05-28 20:26:28.909+0000 [id=1] SEVERE winstone.Logger#logInternal: Container startup failed
java.net.BindException: Address already in use
at java.base/sun.nio.ch.Net.bind0(Native Method)
at java.base/sun.nio.ch.Net.bind(Net.java:459)
at java.base/sun.nio.ch.Net.bind(Net.java:448)
at java.base/sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:227)
at java.base/sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:80)
at org.eclipse.jetty.server.ServerConnector.openAcceptChannel(ServerConnector.java:344)
Caused: java.io.IOException: Failed to bind to 0.0.0.0/0.0.0.0:8080
at org.eclipse.jetty.server.ServerConnector.openAcceptChannel(ServerConnector.java:349)
at org.eclipse.jetty.server.ServerConnector.open(ServerConnector.java:310)
at org.eclipse.jetty.server.AbstractNetworkConnector.doStart(AbstractNetworkConnector.java:80)
at org.eclipse.jetty.server.ServerConnector.doStart(ServerConnector.java:234)
at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:73)
at org.eclipse.jetty.server.Server.doStart(Server.java:401)
at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:73)
at winstone.Launcher.<init>(Launcher.java:192)
Caused: java.io.IOException: Failed to start Jetty
at winstone.Launcher.<init>(Launcher.java:194)
at winstone.Launcher.main(Launcher.java:369)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:566)
at Main._main(Main.java:304)
at Main.main(Main.java:108)
In the Jenkins system properties I see hudson.plugins.git.GitSCM.ALLOW_LOCAL_CHECKOUT set to true.
Still the problem persists.
I am running Jenkins 2.332.3.
Dockerfile:
FROM jenkins/jenkins:lts
ARG user
ARG password
ARG git_password
ARG branch
USER jenkins
RUN /usr/local/bin/install-plugins.sh \
cloudbees-folder \
antisamy-markup-formatter \
build-timeout \
credentials-binding \
timestamper \
ws-cleanup \
ant \
gradle \
workflow-aggregator \
github-organization-folder \
pipeline-stage-view \
git \
subversion \
ssh-slaves \
matrix-auth \
pam-auth \
ldap \
email-ext \
mailer \
ssh \
build-user-vars-plugin \
yet-another-build-visualizer \
rebuild
ENV JENKINS_USER ${user}
ENV JENKINS_PASS ${password}
ENV GIT_PASS ${git_password}
ENV CURRENT_BRANCH ${branch}
ENV JAVA_OPTS -Dhudson,plugins.git.GitSCM.ALLOW_LOCAL_CHECKOUT=true
ENV JAVA_OPTS -Djenkins.install.runSetupWizard=false
COPY init.groovy.d/dev/*.groovy /usr/share/jenkins/ref/init.groovy.d/
VOLUME /var/jenkins_home
Still the problem exists.
Any ideas on how to fix this?
Is there any way to make my git directory not count as "a local directory"?
I just stumbled on this problem too. Based on this answer, you just need to add the environment variable when starting the container:
docker run ... --env JAVA_OPTS="-Dhudson.plugins.git.GitSCM.ALLOW_LOCAL_CHECKOUT=true"
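For example, a fuller command might look roughly like the sketch below; it uses the stock jenkins/jenkins:lts base image and the standard Jenkins ports purely for illustration, so substitute your own image name:
docker run -d --name jenkins \
  -p 8080:8080 -p 50000:50000 \
  -v jenkins_home:/var/jenkins_home \
  --env JAVA_OPTS="-Dhudson.plugins.git.GitSCM.ALLOW_LOCAL_CHECKOUT=true" \
  jenkins/jenkins:lts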
The answer for me was in the init.groovy.d/dev/jobs.groovy file
After this line:
def scm = new GitSCM("file:///usr/src/");
I added this line
scm.ALLOW_LOCAL_CHECKOUT=true
Your ENV JAVA_OPTS are overwriting each other. Try putting them on the same line.
ENV JAVA_OPTS -Dhudson,plugins.git.GitSCM.ALLOW_LOCAL_CHECKOUT=true -Djenkins.install.runSetupWizard=false
The reason I say that is that I added the line below to my Dockerfile and it works great.
ENV JAVA_OPTS -Dhudson,plugins.git.GitSCM.ALLOW_LOCAL_CHECKOUT=true
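Note that the system property named in the original error message is spelled with dots throughout (hudson.plugins.git.GitSCM.ALLOW_LOCAL_CHECKOUT), so a combined line would presumably be written like this sketch:
# Combined JAVA_OPTS on one line, with the property name taken from the error message above
ENV JAVA_OPTS -Dhudson.plugins.git.GitSCM.ALLOW_LOCAL_CHECKOUT=true -Djenkins.install.runSetupWizard=false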

Kibana not connecting to Elasticsearch (["warning","elasticsearch","admin"],"pid":12,"message":"No living connections"})

Context: I have been struggling all week to get this stack up and running: filebeat -> kafka -> logstash -> elasticsearch -> kibana, each one in its own Docker container (you will find around 3 or 4 other unanswered questions of mine here, resulting from different attempts). I decided to downsize the stack and then move forward block by block until I reach a final docker-compose. So I tried the simplest stack I can imagine, pushing the simplest log I can imagine, and I am facing the issue mentioned in my question title.
Issue: I am trying to run three docker containers straight from the command line: filebeat, elasticsearch and kibana. When I try to start Kibana I get "No living connections". I am carefully following the answer provided in another Stack Overflow question. Any clue why I am not able to connect from the Kibana container to the Elasticsearch container?
Here are all three docker commands:
docker run -d -p 9200:9200 -e "discovery.type=single-node" --volume C:\Dockers\simplest-try\esdata:/usr/share/elasticsearch/data --name elasticsearch_container docker.elastic.co/elasticsearch/elasticsearch:7.5.2
docker run -d --mount type=bind,source=C:\Dockers\simplest-try\filebeat.yml,target=/usr/share/filebeat/filebeat.yml --volume C:\Dockers\simplest-try\mylogs:/mylogs docker.elastic.co/beats/filebeat:7.5.2
docker run -d --name kibana -p 5601:5601 --link elasticsearch_container:elasticsearch_alias -e "ELASTICSEARCH_URL=http://elasticsearch_alias:9200" docker.elastic.co/kibana/kibana:7.5.2
ElasticSearch is up and running:
C:\Dockers\simplest-try>curl localhost:9200
{
"name" : "ffaa2d39a8b2",
"cluster_name" : "docker-cluster",
"cluster_uuid" : "QWYLaAqwSqu76fNwFtZ5AA",
"version" : {
"number" : "7.5.2",
"build_flavor" : "default",
"build_type" : "docker",
"build_hash" : "8bec50e1e0ad29dad5653712cf3bb580cd1afcdf",
"build_date" : "2020-01-15T12:11:52.313576Z",
"build_snapshot" : false,
"lucene_version" : "8.3.0",
"minimum_wire_compatibility_version" : "6.8.0",
"minimum_index_compatibility_version" : "6.0.0-beta1"
},
"tagline" : "You Know, for Search"
}
Kibana container console:
{"type":"log","#timestamp":"2020-02-06T14:53:25Z","tags":["info","plugins-system"],"pid":6,"message":"Setting up [15] plugins: [security,licensing,code,timelion,features,spaces,translations,uiActions,newsfeed,inspector,embeddable,advancedUiActions,expressions,eui_utils,data]"}
{"type":"log","#timestamp":"2020-02-06T14:53:25Z","tags":["info","plugins","security"],"pid":6,"message":"Setting up plugin"}
{"type":"log","#timestamp":"2020-02-06T14:53:25Z","tags":["warning","plugins","security","config"],"pid":6,"message":"Generating a random key for xpack.security.encryptionKey. To prevent sessions from being invalidated on restart, please set xpack.security.encryptionKey in kibana.yml"}
{"type":"log","#timestamp":"2020-02-06T14:53:25Z","tags":["warning","plugins","security","config"],"pid":6,"message":"Session cookies will be transmitted over insecure connections. This is not recommended."}
{"type":"log","#timestamp":"2020-02-06T14:53:25Z","tags":["info","plugins","licensing"],"pid":6,"message":"Setting up plugin"}
{"type":"log","#timestamp":"2020-02-06T14:53:25Z","tags":["info","plugins","code"],"pid":6,"message":"Setting up plugin"}
{"type":"log","#timestamp":"2020-02-06T14:53:25Z","tags":["info","plugins","timelion"],"pid":6,"message":"Setting up plugin"}
{"type":"log","#timestamp":"2020-02-06T14:53:25Z","tags":["info","plugins","features"],"pid":6,"message":"Setting up plugin"}
{"type":"log","#timestamp":"2020-02-06T14:53:25Z","tags":["info","plugins","spaces"],"pid":6,"message":"Setting up plugin"}
{"type":"log","#timestamp":"2020-02-06T14:53:25Z","tags":["info","plugins","translations"],"pid":6,"message":"Setting up plugin"}
{"type":"log","#timestamp":"2020-02-06T14:53:25Z","tags":["info","plugins","data"],"pid":6,"message":"Setting up plugin"}
{"type":"log","#timestamp":"2020-02-06T14:53:41Z","tags":["error","elasticsearch","data"],"pid":6,"message":"Request error, retrying\nGET http://elasticsearch:9200/_xpack => getaddrinfo ENOTFOUND elasticsearch elasticsearch:9200"}
{"type":"log","#timestamp":"2020-02-06T14:53:42Z","tags":["warning","legacy-plugins"],"pid":6,"path":"/usr/share/kibana/src/legacy/core_plugins/visualizations","message":"Skipping non-plugin directory at /usr/share/kibana/src/legacy/core_plugins/visualizations"}
{"type":"log","#timestamp":"2020-02-06T14:53:42Z","tags":["info","plugins-system"],"pid":6,"message":"Starting [8] plugins: [security,licensing,code,timelion,features,spaces,translations,data]"}
{"type":"log","#timestamp":"2020-02-06T14:53:42Z","tags":["warning","elasticsearch","data"],"pid":6,"message":"Unable to revive connection: http://elasticsearch:9200/"}
{"type":"log","#timestamp":"2020-02-06T14:53:42Z","tags":["warning","elasticsearch","data"],"pid":6,"message":"No living connections"}
{"type":"log","#timestamp":"2020-02-06T14:53:42Z","tags":["warning","plugins","licensing"],"pid":6,"message":"License information could not be obtained from Elasticsearch for the [data] cluster. Error: No Living connections"}
{"type":"log","#timestamp":"2020-02-06T14:53:43Z","tags":["error","elasticsearch","admin"],"pid":6,"message":"Request error, retrying\nGET http://elasticsearch:9200/.kibana => getaddrinfo ENOTFOUND elasticsearch elasticsearch:9200"}
{"type":"log","#timestamp":"2020-02-06T14:53:43Z","tags":["error","elasticsearch","admin"],"pid":6,"message":"Request error, retrying\nGET http://elasticsearch:9200/.kibana_task_manager => getaddrinfo ENOTFOUND elasticsearch elasticsearch:9200"}
{"type":"log","#timestamp":"2020-02-06T14:53:44Z","tags":["warning","elasticsearch","admin"],"pid":6,"message":"Unable to revive connection: http://elasticsearch:9200/"}
{"type":"log","#timestamp":"2020-02-06T14:53:44Z","tags":["warning","elasticsearch","admin"],"pid":6,"message":"No living connections"}
{"type":"log","#timestamp":"2020-02-06T14:53:44Z","tags":["warning","migrations"],"pid":6,"message":"Unable to connect to Elasticsearch. Error: No Living connections"}
Although not directly related to my question title, here are details about Filebeat.
Filebeat tries to harvest my log files:
2020-02-06T14:32:23.782Z INFO crawler/crawler.go:72 Loading Inputs: 1
2020-02-06T14:32:23.782Z INFO log/input.go:152 Configured paths: [/mylogs/*.log]
2020-02-06T14:32:23.782Z INFO input/input.go:114 Starting input of type: log; ID: 4094557846902174710
2020-02-06T14:32:23.782Z INFO crawler/crawler.go:106 Loading and starting Inputs completed. Enabled inputs: 1
2020-02-06T14:32:23.788Z INFO log/harvester.go:251 Harvester started for file: /mylogs/y.log
2020-02-06T14:32:23.790Z INFO log/harvester.go:251 Harvester started for file: /mylogs/x.log
filebeat.yml
filebeat.inputs:
- type: log
  paths:
    - '/mylogs/*.log'
  json.message_key: log
  json.keys_under_root: true
processors:
- add_docker_metadata: ~
output.elasticsearch:
  hosts: ["localhost:9200"]
*** Edited: logs after Ibexit's suggestion
2020-02-12T21:33:03.575Z INFO instance/beat.go:610 Home path: [/usr/share/filebeat] Config path: [/usr/share/filebeat] Data path: [/usr/share/filebeat/data] Logs path: [/usr/share/filebeat/logs]
2020-02-12T21:33:03.588Z INFO instance/beat.go:618 Beat ID: d0c71c07-23e0-44e5-b497-195ee9552fe8
2020-02-12T21:33:03.588Z INFO [seccomp] seccomp/seccomp.go:124 Syscall filter successfully installed
2020-02-12T21:33:03.588Z INFO [beat] instance/beat.go:941 Beat info {"system_info": {"beat": {"path": {"config": "/usr/share/filebeat", "data": "/usr/share/filebeat/data", "home": "/usr/share/filebeat", "logs": "/usr/share/filebeat/logs"}, "type": "filebeat", "uuid": "d0c71c07-23e0-44e5-b497-195ee9552fe8"}}}
2020-02-12T21:33:03.588Z INFO [beat] instance/beat.go:950 Build info {"system_info": {"build": {"commit": "a9c141434cd6b25d7a74a9c770be6b70643dc767", "libbeat": "7.5.2", "time": "2020-01-15T11:13:22.000Z", "version": "7.5.2"}}}
2020-02-12T21:33:03.588Z INFO [beat] instance/beat.go:953 Go runtime info {"system_info": {"go": {"os":"linux","arch":"amd64","max_procs":2,"version":"go1.12.12"}}}
2020-02-12T21:33:03.590Z INFO [beat] instance/beat.go:957 Host info {"system_info": {"host": {"architecture":"x86_64","boot_time":"2020-02-12T20:32:39Z","containerized":true,"name":"fcfaea4080e7","ip":["127.0.0.1/8","172.17.0.3/16"],"kernel_version":"4.19.76-linuxkit","mac":["02:42:ac:11:00:03"],"os":{"family":"redhat","platform":"centos","name":"CentOS Linux","version":"7 (Core)","major":7,"minor":7,"patch":1908,"codename":"Core"},"timezone":"UTC","timezone_offset_sec":0}}}
2020-02-12T21:33:03.590Z INFO [beat] instance/beat.go:986 Process info {"system_info": {"process": {"capabilities": {"inheritable":["chown","dac_override","fowner","fsetid","kill","setgid","setuid","setpcap","net_bind_service","net_raw","sys_chroot","mknod","audit_write","setfcap"],"permitted":null,"effective":null,"bounding":["chown","dac_override","fowner","fsetid","kill","setgid","setuid","setpcap","net_bind_service","net_raw","sys_chroot","mknod","audit_write","setfcap"],"ambient":null}, "cwd": "/usr/share/filebeat", "exe": "/usr/share/filebeat/filebeat", "name": "filebeat", "pid": 1, "ppid": 0, "seccomp": {"mode":"filter","no_new_privs":true}, "start_time": "2020-02-12T21:33:02.690Z"}}}
2020-02-12T21:33:03.590Z INFO instance/beat.go:297 Setup Beat: filebeat; Version: 7.5.2
2020-02-12T21:33:03.590Z INFO [index-management] idxmgmt/std.go:182 Set output.elasticsearch.index to 'filebeat-7.5.2' as ILM is enabled.
2020-02-12T21:33:03.591Z INFO elasticsearch/client.go:171 Elasticsearch url: http://elasticsearch:9200
2020-02-12T21:33:03.591Z INFO [publisher] pipeline/module.go:97 Beat name: fcfaea4080e7
2020-02-12T21:33:03.593Z INFO instance/beat.go:429 filebeat start running.
2020-02-12T21:33:03.593Z INFO [monitoring] log/log.go:118 Starting metrics logging every 30s
2020-02-12T21:33:03.594Z INFO registrar/migrate.go:104 No registry home found. Create: /usr/share/filebeat/data/registry/filebeat
2020-02-12T21:33:03.594Z INFO registrar/migrate.go:112 Initialize registry meta file
2020-02-12T21:33:03.600Z INFO registrar/registrar.go:108 No registry file found under: /usr/share/filebeat/data/registry/filebeat/data.json. Creating a new registry file.
2020-02-12T21:33:03.611Z INFO registrar/registrar.go:145 Loading registrar data from /usr/share/filebeat/data/registry/filebeat/data.json
2020-02-12T21:33:03.611Z INFO registrar/registrar.go:152 States Loaded from registrar: 0
2020-02-12T21:33:03.612Z INFO crawler/crawler.go:72 Loading Inputs: 1
2020-02-12T21:33:03.612Z INFO log/input.go:152 Configured paths: [/mylogs/*.log]
2020-02-12T21:33:03.612Z INFO input/input.go:114 Starting input of type: log; ID: 4094557846902174710
2020-02-12T21:33:03.612Z INFO crawler/crawler.go:106 Loading and starting Inputs completed. Enabled inputs: 1
2020-02-12T21:33:03.640Z INFO log/harvester.go:251 Harvester started for file: /mylogs/b.log
2020-02-12T21:33:03.640Z ERROR readjson/json.go:52 Error decoding JSON: invalid character '\'' looking for beginning of object key string
2020-02-12T21:33:03.642Z INFO log/harvester.go:251 Harvester started for file: /mylogs/c.log
2020-02-12T21:33:03.644Z INFO log/harvester.go:251 Harvester started for file: /mylogs/w.log
2020-02-12T21:33:03.645Z ERROR readjson/json.go:52 Error decoding JSON: invalid character 'q' looking for beginning of value
2020-02-12T21:33:03.645Z INFO log/harvester.go:251 Harvester started for file: /mylogs/x.log
2020-02-12T21:33:03.652Z INFO log/harvester.go:251 Harvester started for file: /mylogs/y.log
2020-02-12T21:33:04.654Z INFO pipeline/output.go:95 Connecting to backoff(elasticsearch(http://elasticsearch:9200))
2020-02-12T21:33:04.684Z INFO elasticsearch/client.go:753 Attempting to connect to Elasticsearch version 7.5.2
2020-02-12T21:33:04.720Z INFO [index-management] idxmgmt/std.go:256 Auto ILM enable success.
2020-02-12T21:33:04.724Z INFO [index-management.ilm] ilm/std.go:138 do not generate ilm policy: exists=true, overwrite=false
2020-02-12T21:33:04.724Z INFO [index-management] idxmgmt/std.go:269 ILM policy successfully loaded.
2020-02-12T21:33:04.725Z INFO [index-management] idxmgmt/std.go:408 Set setup.template.name to '{filebeat-7.5.2 {now/d}-000001}' as ILM is enabled.
2020-02-12T21:33:04.725Z INFO [index-management] idxmgmt/std.go:413 Set setup.template.pattern to 'filebeat-7.5.2-*' as ILM is enabled.
2020-02-12T21:33:04.725Z INFO [index-management] idxmgmt/std.go:447 Set settings.index.lifecycle.rollover_alias in template to {filebeat-7.5.2 {now/d}-000001} as ILM is enabled.
2020-02-12T21:33:04.725Z INFO [index-management] idxmgmt/std.go:451 Set settings.index.lifecycle.name in template to {filebeat-7.5.2 {"policy":{"phases":{"hot":{"actions":{"rollover":{"max_age":"30d","max_size":"50gb"}}}}}}} as ILM is enabled.
2020-02-12T21:33:04.730Z INFO template/load.go:89 Template filebeat-7.5.2 already exists and will not be overwritten.
2020-02-12T21:33:04.730Z INFO [index-management] idxmgmt/std.go:293 Loaded index template.
2020-02-12T21:33:04.734Z INFO [index-management] idxmgmt/std.go:304 Write alias successfully generated.
2020-02-12T21:33:04.736Z INFO pipeline/output.go:105 Connection to backoff(elasticsearch(http://elasticsearch:9200)) established
2020-02-12T21:33:33.595Z INFO [monitoring] log/log.go:145 Non-zero metrics in the last 30s {"monitoring": {"metrics": {"beat":{"cpu":{"system":{"ticks":50,"time":{"ms":50}},"total":{"ticks":100,"time":{"ms":107},"value":100},"user":{"ticks":50,"time":{"ms":57}}},"handles":{"limit":{"hard":1048576,"soft":1048576},"open":11},"info":{"ephemeral_id":"d28c1982-c6bd-43b4-bfbb-c439f909b057","uptime":{"ms":30060}},"memstats":{"gc_next":8351264,"memory_alloc":4760176,"memory_total":12037984,"rss":43970560},"runtime":{"goroutines":42}},"filebeat":{"events":{"added":8,"done":8},"harvester":{"open_files":5,"running":5,"started":5}},"libbeat":{"config":{"module":{"running":0}},"output":{"events":{"acked":3,"batches":1,"total":3},"read":{"bytes":2942},"type":"elasticsearch","write":{"bytes":2545}},"pipeline":{"clients":1,"events":{"active":0,"filtered":5,"published":3,"retry":3,"total":8},"queue":{"acked":3}}},"registrar":{"states":{"current":5,"update":8},"writes":{"success":7,"total":7}},"system":{"cpu":{"cores":2},"load":{"1":0.02,"15":0.08,"5":0.1,"norm":{"1":0.01,"15":0.04,"5":0.05}}}}}}
2020-02-12T21:33:58.657Z ERROR readjson/json.go:52 Error decoding JSON: invalid character 'E' looking for beginning of value
2020-02-12T21:33:58.657Z ERROR readjson/json.go:52 Error decoding JSON: invalid character 'a' looking for beginning of value
2020-02-12T21:34:03.596Z INFO [monitoring] log/log.go:145 Non-zero metrics in the last 30s {"monitoring": {"metrics": {"beat":{"cpu":{"system":{"ticks":60,"time":{"ms":13}},"total":{"ticks":120,"time":{"ms":16},"value":120},"user":{"ticks":60,"time":{"ms":3}}},"handles":{"limit":{"hard":1048576,"soft":1048576},"open":11},"info":{"ephemeral_id":"d28c1982-c6bd-43b4-bfbb-c439f909b057","uptime":{"ms":60059}},"memstats":{"gc_next":8351264,"memory_alloc":5345000,"memory_total":12622808},"runtime":{"goroutines":42}},"filebeat":{"events":{"added":2,"done":2},"harvester":{"open_files":5,"running":5}},"libbeat":{"config":{"module":{"running":0}},"output":{"events":{"acked":2,"batches":1,"total":2},"read":{"bytes":351},"write":{"bytes":1062}},"pipeline":{"clients":1,"events":{"active":0,"published":2,"total":2},"queue":{"acked":2}}},"registrar":{"states":{"current":5,"update":2},"writes":{"success":1,"total":1}},"system":{"load":{"1":0.01,"15":0.08,"5":0.09,"norm":{"1":0.005,"15":0.04,"5":0.045}}}}}}
2020-02-12T21:34:33.599Z INFO [monitoring] log/log.go:145 Non-zero metrics in the last 30s {"monitoring": {"metrics": {"beat":{"cpu":{"system":{"ticks":70,"time":{"ms":10}},"total":{"ticks":130,"time":{"ms":14},"value":130},"user":{"ticks":60,"time":{"ms":4}}},"handles":{"limit":{"hard":1048576,"soft":1048576},"open":11},"info":{"ephemeral_id":"d28c1982-c6bd-43b4-bfbb-c439f909b057","uptime":{"ms":90059}},"memstats":{"gc_next":8351264,"memory_alloc":5714936,"memory_total":12992744,"rss":380928},"runtime":{"goroutines":42}},"filebeat":{"harvester":{"open_files":5,"running":5}},"libbeat":{"config":{"module":{"running":0}},"pipeline":{"clients":1,"events":{"active":0}}},"registrar":{"states":{"current":5}},"system":{"load":{"1":0.07,"15":0.08,"5":0.1,"norm":{"1":0.035,"15":0.04,"5":0.05}}}}}}
2020-02-12T21:34:33.686Z INFO log/harvester.go:251 Harvester started for file: /mylogs/d.log
2020-02-12T21:35:03.597Z INFO [monitoring] log/log.go:145 Non-zero metrics in the last 30s {"monitoring": {"metrics": {"beat":{"cpu":{"system":{"ticks":80,"time":{"ms":16}},"total":{"ticks":140,"time":{"ms":21},"value":140},"user":{"ticks":60,"time":{"ms":5}}},"handles":{"limit":{"hard":1048576,"soft":1048576},"open":12},"info":{"ephemeral_id":"d28c1982-c6bd-43b4-bfbb-c439f909b057","uptime":{"ms":120059}},"memstats":{"gc_next":8351264,"memory_alloc":6130552,"memory_total":13408360},"runtime":{"goroutines":46}},"filebeat":{"events":{"added":1,"done":1},"harvester":{"open_files":6,"running":6,"started":1}},"libbeat":{"config":{"module":{"running":0}},"pipeline":{"clients":1,"events":{"active":0,"filtered":1,"total":1}}},"registrar":{"states":{"current":6,"update":1},"writes":{"success":1,"total":1}},"system":{"load":{"1":0.15,"15":0.09,"5":0.12,"norm":{"1":0.075,"15":0.045,"5":0.06}}}}}}
2020-02-12T21:35:33.596Z INFO [monitoring] log/log.go:145 Non-zero metrics in the last 30s {"monitoring": {"metrics": {"beat":{"cpu":{"system":{"ticks":100,"time":{"ms":14}},"total":{"ticks":170,"time":{"ms":23},"value":170},"user":{"ticks":70,"time":{"ms":9}}},"handles":{"limit":{"hard":1048576,"soft":1048576},"open":12},"info":{"ephemeral_id":"d28c1982-c6bd-43b4-bfbb-c439f909b057","uptime":{"ms":150060}},"memstats":{"gc_next":7948720,"memory_alloc":4110408,"memory_total":13866968},"runtime":{"goroutines":46}},"filebeat":{"harvester":{"open_files":6,"running":6}},"libbeat":{"config":{"module":{"running":0}},"pipeline":{"clients":1,"events":{"active":0}}},"registrar":{"states":{"current":6}},"system":{"load":{"1":0.09,"15":0.08,"5":0.11,"norm":{"1":0.045,"15":0.04,"5":0.055}}}}}}
2020-02-12T21:36:03.597Z INFO [monitoring] log/log.go:145 Non-zero metrics in the last 30s {"monitoring": {"metrics": {"beat":{"cpu":{"system":{"ticks":110,"time":{"ms":7}},"total":{"ticks":190,"time":{"ms":9},"value":190},"user":{"ticks":80,"time":{"ms":2}}},"handles":{"limit":{"hard":1048576,"soft":1048576},"open":12},"info":{"ephemeral_id":"d28c1982-c6bd-43b4-bfbb-c439f909b057","uptime":{"ms":180059}},"memstats":{"gc_next":7948720,"memory_alloc":4399584,"memory_total":14156144},"runtime":{"goroutines":46}},"filebeat":{"harvester":{"open_files":6,"running":6}},"libbeat":{"config":{"module":{"running":0}},"pipeline":{"clients":1,"events":{"active":0}}},"registrar":{"states":{"current":6}},"system":{"load":{"1":0.38,"15":0.11,"5":0.18,"norm":{"1":0.19,"15":0.055,"5":0.09}}}}}}
2020-02-12T21:36:33.596Z INFO [monitoring] log/log.go:145 Non-zero metrics in the last 30s {"monitoring": {"metrics": {"beat":{"cpu":{"system":{"ticks":120,"time":{"ms":11}},"total":{"ticks":200,"time":{"ms":15},"value":200},"user":{"ticks":80,"time":{"ms":4}}},"handles":{"limit":{"hard":1048576,"soft":1048576},"open":12},"info":{"ephemeral_id":"d28c1982-c6bd-43b4-bfbb-c439f909b057","uptime":{"ms":210059}},"memstats":{"gc_next":7948720,"memory_alloc":4776320,"memory_total":14532880},"runtime":{"goroutines":46}},"filebeat":{"harvester":{"open_files":6,"running":6}},"libbeat":{"config":{"module":{"running":0}},"pipeline":{"clients":1,"events":{"active":0}}},"registrar":{"states":{"current":6}},"system":{"load":{"1":0.23,"15":0.1,"5":0.16,"norm":{"1":0.115,"15":0.05,"5":0.08}}}}}}
2020-02-12T21:37:03.600Z INFO [monitoring] log/log.go:145 Non-zero metrics in the last 30s {"monitoring": {"metrics": {"beat":{"cpu":{"system":{"ticks":120,"time":{"ms":9}},"total":{"ticks":210,"time":{"ms":16},"value":210},"user":{"ticks":90,"time":{"ms":7}}},"handles":{"limit":{"hard":1048576,"soft":1048576},"open":12},"info":{"ephemeral_id":"d28c1982-c6bd-43b4-bfbb-c439f909b057","uptime":{"ms":240059}},"memstats":{"gc_next":7948720,"memory_alloc":5142416,"memory_total":14898976},"runtime":{"goroutines":46}},"filebeat":{"harvester":{"open_files":6,"running":6}},"libbeat":{"config":{"module":{"running":0}},"pipeline":{"clients":1,"events":{"active":0}}},"registrar":{"states":{"current":6}},"system":{"load":{"1":0.14,"15":0.1,"5":0.14,"norm":{"1":0.07,"15":0.05,"5":0.07}}}}}}
2020-02-12T21:37:33.596Z INFO [monitoring] log/log.go:145 Non-zero metrics in the last 30s {"monitoring": {"metrics": {"beat":{"cpu":{"system":{"ticks":140,"time":{"ms":12}},"total":{"ticks":240,"time":{"ms":24},"value":240},"user":{"ticks":100,"time":{"ms":12}}},"handles":{"limit":{"hard":1048576,"soft":1048576},"open":12},"info":{"ephemeral_id":"d28c1982-c6bd-43b4-bfbb-c439f909b057","uptime":{"ms":270060}},"memstats":{"gc_next":7946160,"memory_alloc":4111832,"memory_total":15348288},"runtime":{"goroutines":46}},"filebeat":{"harvester":{"open_files":6,"running":6}},"libbeat":{"config":{"module":{"running":0}},"pipeline":{"clients":1,"events":{"active":0}}},"registrar":{"states":{"current":6}},"system":{"load":{"1":0.08,"15":0.09,"5":0.13,"norm":{"1":0.04,"15":0.045,"5":0.065}}}}}}
2020-02-12T21:38:03.596Z INFO [monitoring] log/log.go:145 Non-zero metrics in the last 30s {"monitoring": {"metrics": {"beat":{"cpu":{"system":{"ticks":150,"time":{"ms":11}},"total":{"ticks":250,"time":{"ms":12},"value":250},"user":{"ticks":100,"time":{"ms":1}}},"handles":{"limit":{"hard":1048576,"soft":1048576},"open":12},"info":{"ephemeral_id":"d28c1982-c6bd-43b4-bfbb-c439f909b057","uptime":{"ms":300060}},"memstats":{"gc_next":7946160,"memory_alloc":4489960,"memory_total":15726416},"runtime":{"goroutines":46}},"filebeat":{"harvester":{"open_files":6,"running":6}},"libbeat":{"config":{"module":{"running":0}},"pipeline":{"clients":1,"events":{"active":0}}},"registrar":{"states":{"current":6}},"system":{"load":{"1":0.1,"15":0.09,"5":0.13,"norm":{"1":0.05,"15":0.045,"5":0.065}}}}}}
2020-02-12T21:38:08.676Z INFO log/harvester.go:276 File is inactive: /mylogs/w.log. Closing because close_inactive of 5m0s reached.
2020-02-12T21:38:08.676Z INFO log/harvester.go:276 File is inactive: /mylogs/c.log. Closing because close_inactive of 5m0s reached.
2020-02-12T21:38:08.678Z INFO log/harvester.go:276 File is inactive: /mylogs/b.log. Closing because close_inactive of 5m0s reached.
2020-02-12T21:38:08.678Z INFO log/harvester.go:276 File is inactive: /mylogs/y.log. Closing because close_inactive of 5m0s reached.
2020-02-12T21:38:13.706Z INFO log/harvester.go:251 Harvester started for file: /mylogs/y.log
2020-02-12T21:38:33.594Z INFO [monitoring] log/log.go:145 Non-zero metrics in the last 30s {"monitoring": {"metrics": {"beat":{"cpu":{"system":{"ticks":150,"time":{"ms":5}},"total":{"ticks":250,"time":{"ms":9},"value":250},"user":{"ticks":100,"time":{"ms":4}}},"handles":{"limit":{"hard":1048576,"soft":1048576},"open":9},"info":{"ephemeral_id":"d28c1982-c6bd-43b4-bfbb-c439f909b057","uptime":{"ms":330059}},"memstats":{"gc_next":7946160,"memory_alloc":5014240,"memory_total":16250696},"runtime":{"goroutines":34}},"filebeat":{"events":{"added":5,"done":5},"harvester":{"closed":4,"open_files":3,"running":3,"started":1}},"libbeat":{"config":{"module":{"running":0}},"pipeline":{"clients":1,"events":{"active":0,"filtered":5,"total":5}}},"registrar":{"states":{"current":6,"update":5},"writes":{"success":5,"total":5}},"system":{"load":{"1":0.88,"15":0.15,"5":0.31,"norm":{"1":0.44,"15":0.075,"5":0.155}}}}}}
2020-02-12T21:39:03.595Z INFO [monitoring] log/log.go:145 Non-zero metrics in the last 30s {"monitoring": {"metrics": {"beat":{"cpu":{"system":{"ticks":160,"time":{"ms":6}},"total":{"ticks":270,"time":{"ms":8},"value":270},"user":{"ticks":110,"time":{"ms":2}}},"handles":{"limit":{"hard":1048576,"soft":1048576},"open":9},"info":{"ephemeral_id":"d28c1982-c6bd-43b4-bfbb-c439f909b057","uptime":{"ms":360059}},"memstats":{"gc_next":7946160,"memory_alloc":5284712,"memory_total":16521168},"runtime":{"goroutines":34}},"filebeat":{"harvester":{"open_files":3,"running":3}},"libbeat":{"config":{"module":{"running":0}},"pipeline":{"clients":1,"events":{"active":0}}},"registrar":{"states":{"current":6}},"system":{"load":{"1":0.68,"15":0.16,"5":0.31,"norm":{"1":0.34,"15":0.08,"5":0.155}}}}}}
2020-02-12T21:39:03.676Z INFO log/harvester.go:276 File is inactive: /mylogs/x.log. Closing because close_inactive of 5m0s reached.
2020-02-12T21:39:33.596Z INFO [monitoring] log/log.go:145 Non-zero metrics in the last 30s {"monitoring": {"metrics": {"beat":{"cpu":{"system":{"ticks":160,"time":{"ms":5}},"total":{"ticks":270,"time":{"ms":12},"value":270},"user":{"ticks":110,"time":{"ms":7}}},"handles":{"limit":{"hard":1048576,"soft":1048576},"open":8},"info":{"ephemeral_id":"d28c1982-c6bd-43b4-bfbb-c439f909b057","uptime":{"ms":390059}},"memstats":{"gc_next":7666032,"memory_alloc":3879448,"memory_total":16793464},"runtime":{"goroutines":30}},"filebeat":{"events":{"added":1,"done":1},"harvester":{"closed":1,"open_files":2,"running":2}},"libbeat":{"config":{"module":{"running":0}},"pipeline":{"clients":1,"events":{"active":0,"filtered":1,"total":1}}},"registrar":{"states":{"current":6,"update":1},"writes":{"success":1,"total":1}},"system":{"load":{"1":0.48,"15":0.16,"5":0.3,"norm":{"1":0.24,"15":0.08,"5":0.15}}}}}}
2020-02-12T21:39:38.705Z INFO log/harvester.go:276 File is inactive: /mylogs/d.log. Closing because close_inactive of 5m0s reached.
2020-02-12T21:39:43.714Z INFO log/harvester.go:251 Harvester started for file: /mylogs/d.log
2020-02-12T21:39:43.715Z ERROR readjson/json.go:52 Error decoding JSON: EOF
2020-02-12T21:39:49.724Z INFO log/harvester.go:264 File was truncated. Begin reading file from offset 0: /mylogs/d.log
2020-02-12T21:39:53.720Z INFO log/harvester.go:251 Harvester started for file: /mylogs/d.log
2020-02-12T21:39:53.721Z ERROR readjson/json.go:52 Error decoding JSON: EOF
2020-02-12T21:40:03.597Z INFO [monitoring] log/log.go:145 Non-zero metrics in the last 30s {"monitoring": {"metrics": {"beat":{"cpu":{"system":{"ticks":190,"time":{"ms":30}},"total":{"ticks":320,"time":{"ms":46},"value":320},"user":{"ticks":130,"time":{"ms":16}}},"handles":{"limit":{"hard":1048576,"soft":1048576},"open":8},"info":{"ephemeral_id":"d28c1982-c6bd-43b4-bfbb-c439f909b057","uptime":{"ms":420059}},"memstats":{"gc_next":7666032,"memory_alloc":4930512,"memory_total":17844528},"runtime":{"goroutines":30}},"filebeat":{"events":{"added":8,"done":8},"harvester":{"closed":2,"open_files":2,"running":2,"started":2},"input":{"log":{"files":{"truncated":1}}}},"libbeat":{"config":{"module":{"running":0}},"output":{"events":{"acked":4,"batches":2,"total":4},"read":{"bytes":702},"write":{"bytes":2270}},"pipeline":{"clients":1,"events":{"active":0,"filtered":4,"published":4,"total":8},"queue":{"acked":4}}},"registrar":{"states":{"current":6,"update":8},"writes":{"success":6,"total":6}},"system":{"load":{"1":0.59,"15":0.17,"5":0.33,"norm":{"1":0.295,"15":0.085,"5":0.165}}}}}}
The problem is that the three containers are isolated from each other in terms of networking and/or misconfigured. Let us discuss what is actually happening and how to fix it:
1. Elasticsearch
You are starting an elasticsearch container named elasticsearch_container:
docker run -d -p 9200:9200 -e "discovery.type=single-node" --volume C:\Dockers\simplest-try\esdata:/usr/share/elasticsearch/data --name elasticsearch_container docker.elastic.co/elasticsearch/elasticsearch:7.5.2
So far, so good.
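As a quick sanity check (assuming port 9200 is actually free on the Docker host so the -p 9200:9200 mapping works), you can verify that Elasticsearch came up before wiring in the other containers:
curl http://localhost:9200
It should answer with a short JSON document containing the cluster name and the version 7.5.2.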
2. Filebeat
As mentioned at the beginning, the containers are separated from each other. In order to make elasticsearch visible to filebeat, you need to create a link:
docker run -d --link elasticsearch_container:elasticsearch --mount type=bind,source=C:\Dockers\simplest-try\filebeat.yml,target=/usr/share/filebeat/filebeat.yml --volume C:\Dockers\simplest-try\mylogs:/mylogs docker.elastic.co/beats/filebeat:7.5.2
Please note the container link --link elasticsearch_container:elasticsearch, which is the key here. Now that elasticsearch_container is visible to filebeat under the name elasticsearch, we need to change filebeat.yml accordingly:
output.elasticsearch:
hosts: ["http://elasticsearch:9200"]
localhost here would be interpreted from the perspective of the filebeat container, which is unaware of the Docker host, so localhost inside the filebeat container addresses the filebeat container itself. With the configuration change above, the output points at the name of the linked elasticsearch container instead, which should do the trick.
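For reference, a minimal filebeat.yml along these lines should work with the bind mounts from the run command above. The input path and the JSON options are assumptions based on the decoding errors in your log, so adjust or drop them as needed:
filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /mylogs/*.log
  # assumption: the files contain JSON lines; remove these two options for plain text logs
  json.keys_under_root: true
  json.add_error_key: true
output.elasticsearch:
  hosts: ["http://elasticsearch:9200"]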
3. Kibana
Kibana is complaining about a missing connection to elasticsearch:
Unable to revive connection: http://elasticsearch:9200
Here it is the same situation as for filebeat: elasticsearch is not visible to the kibana container under the name elasticsearch, but under elasticsearch_alias. Additionally, ELASTICSEARCH_URL is not an expected setting in the Kibana version you are using; elasticsearch.hosts is the correct one, and it defaults to http://elasticsearch:9200. This is the root of the error message: kibana does not recognise ELASTICSEARCH_URL, falls back to the default value, and fails because elasticsearch_container is linked as elasticsearch_alias rather than elasticsearch. Fixing this is easy: remove ELASTICSEARCH_URL and let kibana fall back to the default. To make elasticsearch visible to kibana, we apply the same link we already used for filebeat:
docker run -d --name kibana -p 5601:5601 --link elasticsearch_container:elasticsearch docker.elastic.co/kibana/kibana:7.5.2
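Afterwards you can tail the kibana logs to confirm that the connection error is gone; the UI should then be reachable on http://localhost:5601 of the Docker host:
docker logs -f kibana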
Important:
Before executing the discussed changes, please dispose of (stop and remove) the old container instances, as they are still claiming the container names.
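For example (the old filebeat container was started without --name, so look its generated name up first; the placeholder below is just illustrative):
docker ps -a                                             # list old containers, including the unnamed filebeat one
docker stop elasticsearch_container kibana <old_filebeat_container>
docker rm elasticsearch_container kibana <old_filebeat_container>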

WildFly Docker 8.2 version stuck in startup

We are migrating our legacy application to the WildFly 8.2 Docker version.
I am trying to copy the SQL Server, MySQL and Postgres drivers under /modules. When I start my server, it hangs. Here is the output of the log.
Starting /opt/jboss/wildfly/bin/standalone.sh -b 0.0.0.0 -bmanagement 0.0.0.0...
=========================================================================
JBoss Bootstrap Environment
JBOSS_HOME: /opt/jboss/wildfly
JAVA: /usr/lib/jvm/java/bin/java
JAVA_OPTS: -server -XX:+UseCompressedOops -server -XX:+UseCompressedOops -Xms64m -Xmx512m -XX:MaxPermSize=256m -Djava.net.preferIPv4Stack=true -Djboss.modules.system.pkgs=org.jboss.byteman -Djava.awt.headless=true
=========================================================================
OpenJDK 64-Bit Server VM warning: ignoring option MaxPermSize=256m; support was removed in 8.0
16:29:24,256 INFO [org.jboss.modules] (main) JBoss Modules version 1.3.3.Final
16:29:24,699 INFO [org.jboss.msc] (main) JBoss MSC version 1.2.2.Final
16:29:24,889 INFO [org.jboss.as] (MSC service thread 1-6) JBAS015899: WildFly 8.2.1.Final "Tweek" starting
16:29:26,992 INFO [org.jboss.as.server] (Controller Boot Thread) JBAS015888: Creating http management service using socket-binding (management-http)
16:29:27,045 INFO [org.xnio] (MSC service thread 1-3) XNIO version 3.3.0.Final
16:29:27,073 INFO [org.xnio.nio] (MSC service thread 1-3) XNIO NIO Implementation Version 3.3.0.Final
Any thoughts on why it's getting stuck?
Thanks
Rakesh
