I am running a Jenkins server based on the https://hub.docker.com/r/jenkins/jenkins LTS image. While running version 2.332.1, I have noticed the following exceptions during initialization, which I had not seen before.
Any help on how to solve this is highly appreciated.
Attaching to jenkins-server_jenkins_1
jenkins_1 | Running from: /usr/share/jenkins/jenkins.war
jenkins_1 | webroot: EnvVars.masterEnvVars.get("JENKINS_HOME")
jenkins_1 | 2022-05-03 12:24:54.755+0000 [id=1] INFO org.eclipse.jetty.util.log.Log#initialized: Logging initialized #407ms to org.eclipse.jetty.util.log.JavaUtilLog
jenkins_1 | 2022-05-03 12:24:54.828+0000 [id=1] INFO winstone.Logger#logInternal: Beginning extraction from war file
jenkins_1 | 2022-05-03 12:24:54.855+0000 [id=1] WARNING o.e.j.s.handler.ContextHandler#setContextPath: Empty contextPath
jenkins_1 | 2022-05-03 12:24:54.907+0000 [id=1] INFO org.eclipse.jetty.server.Server#doStart: jetty-9.4.43.v20210629; built: 2021-06-30T11:07:22.254Z; git: 526006ecfa3af7f1a27ef3a288e2bef7ea9dd7e8; jvm 11.0.14.1+1
jenkins_1 | 2022-05-03 12:24:55.131+0000 [id=1] INFO o.e.j.w.StandardDescriptorProcessor#visitServlet: NO JSP Support for /, did not find org.eclipse.jetty.jsp.JettyJspServlet
jenkins_1 | 2022-05-03 12:24:55.163+0000 [id=1] INFO o.e.j.s.s.DefaultSessionIdManager#doStart: DefaultSessionIdManager workerName=node0
jenkins_1 | 2022-05-03 12:24:55.163+0000 [id=1] INFO o.e.j.s.s.DefaultSessionIdManager#doStart: No SessionScavenger set, using defaults
jenkins_1 | 2022-05-03 12:24:55.164+0000 [id=1] INFO o.e.j.server.session.HouseKeeper#startScavenging: node0 Scavenging every 600000ms
jenkins_1 | 2022-05-03 12:24:55.548+0000 [id=1] INFO hudson.WebAppMain#contextInitialized: Jenkins home directory: /var/jenkins_home found at: EnvVars.masterEnvVars.get("JENKINS_HOME")
jenkins_1 | 2022-05-03 12:24:55.707+0000 [id=1] INFO o.e.j.s.handler.ContextHandler#doStart: Started w.#74cf8b28{Jenkins v2.332.2,/,file:///var/jenkins_home/war/,AVAILABLE}{/var/jenkins_home/war}
jenkins_1 | 2022-05-03 12:24:55.735+0000 [id=1] INFO o.e.j.server.AbstractConnector#doStart: Started ServerConnector#6c6cb480{HTTP/1.1, (http/1.1)}{0.0.0.0:8080}
jenkins_1 | 2022-05-03 12:24:55.735+0000 [id=1] INFO org.eclipse.jetty.server.Server#doStart: Started #1389ms
jenkins_1 | 2022-05-03 12:24:55.744+0000 [id=24] INFO winstone.Logger#logInternal: Winstone Servlet Engine running: controlPort=disabled
jenkins_1 | 2022-05-03 12:24:55.975+0000 [id=30] INFO jenkins.InitReactorRunner$1#onAttained: Started initialization
jenkins_1 | 2022-05-03 12:24:56.145+0000 [id=29] INFO hudson.ClassicPluginStrategy#createPluginWrapper: Plugin discard-old-build.jpi is disabled
jenkins_1 | 2022-05-03 12:24:56.231+0000 [id=29] INFO jenkins.InitReactorRunner$1#onAttained: Listed all plugins
jenkins_1 | 2022-05-03 12:24:58.245+0000 [id=34] WARNING hudson.ExtensionFinder$Sezpoz#scout: Failed to scout org.jenkinsci.plugins.pipeline.modeldefinition.endpoints.ModelConverterAction
jenkins_1 | java.lang.ClassNotFoundException: com.github.fge.jsonschema.tree.JsonTree
jenkins_1 | at org.apache.tools.ant.AntClassLoader.findClassInComponents(AntClassLoader.java:1402)
jenkins_1 | at org.apache.tools.ant.AntClassLoader.findClass(AntClassLoader.java:1357)
jenkins_1 | at org.apache.tools.ant.AntClassLoader.loadClass(AntClassLoader.java:1112)
jenkins_1 | at java.base/java.lang.ClassLoader.loadClass(ClassLoader.java:522)
jenkins_1 | Caused: java.lang.NoClassDefFoundError: com/github/fge/jsonschema/tree/JsonTree
jenkins_1 | at java.base/java.lang.Class.forName0(Native Method)
jenkins_1 | at java.base/java.lang.Class.forName(Class.java:398)
jenkins_1 | at hudson.ExtensionFinder$Sezpoz.scout(ExtensionFinder.java:730)
jenkins_1 | at hudson.ClassicPluginStrategy.findComponents(ClassicPluginStrategy.java:352)
jenkins_1 | at hudson.ExtensionList.load(ExtensionList.java:384)
jenkins_1 | at hudson.ExtensionList.ensureLoaded(ExtensionList.java:320)
jenkins_1 | at hudson.ExtensionList.getComponents(ExtensionList.java:184)
jenkins_1 | at jenkins.model.Jenkins$6.onInitMilestoneAttained(Jenkins.java:1188)
jenkins_1 | at jenkins.InitReactorRunner$1.onAttained(InitReactorRunner.java:88)
jenkins_1 | at org.jvnet.hudson.reactor.ReactorListener$Aggregator.lambda$onAttained$3(ReactorListener.java:108)
jenkins_1 | at org.jvnet.hudson.reactor.ReactorListener$Aggregator.run(ReactorListener.java:115)
jenkins_1 | at org.jvnet.hudson.reactor.ReactorListener$Aggregator.onAttained(ReactorListener.java:108)
jenkins_1 | at org.jvnet.hudson.reactor.Reactor$1.run(Reactor.java:183)
jenkins_1 | at org.jvnet.hudson.reactor.Reactor$Node.run(Reactor.java:121)
jenkins_1 | at jenkins.security.ImpersonatingExecutorService$1.run(ImpersonatingExecutorService.java:68)
jenkins_1 | at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
jenkins_1 | at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
jenkins_1 | at java.base/java.lang.Thread.run(Thread.java:829)
jenkins_1 | 2022-05-03 12:24:59.388+0000 [id=34] WARNING h.ExtensionFinder$GuiceFinder$SezpozModule#configure: Failed to load org.jenkinsci.plugins.pipeline.modeldefinition.endpoints.ModelConverterAction
jenkins_1 | java.lang.ClassNotFoundException: com.github.fge.jsonschema.tree.JsonTree
jenkins_1 | at org.apache.tools.ant.AntClassLoader.findClassInComponents(AntClassLoader.java:1402)
jenkins_1 | at org.apache.tools.ant.AntClassLoader.findClass(AntClassLoader.java:1357)
jenkins_1 | at org.apache.tools.ant.AntClassLoader.loadClass(AntClassLoader.java:1112)
jenkins_1 | at java.base/java.lang.ClassLoader.loadClass(ClassLoader.java:522)
jenkins_1 | Caused: java.lang.NoClassDefFoundError: com/github/fge/jsonschema/tree/JsonTree
jenkins_1 | at java.base/java.lang.Class.getDeclaredConstructors0(Native Method)
jenkins_1 | at java.base/java.lang.Class.privateGetDeclaredConstructors(Class.java:3137)
jenkins_1 | at java.base/java.lang.Class.getDeclaredConstructors(Class.java:2357)
jenkins_1 | at hudson.ExtensionFinder$GuiceFinder$SezpozModule.resolve(ExtensionFinder.java:501)
jenkins_1 | at hudson.ExtensionFinder$GuiceFinder$SezpozModule.resolve(ExtensionFinder.java:487)
jenkins_1 | at hudson.ExtensionFinder$GuiceFinder$SezpozModule.configure(ExtensionFinder.java:531)
jenkins_1 | at com.google.inject.AbstractModule.configure(AbstractModule.java:64)
jenkins_1 | at com.google.inject.spi.Elements$RecordingBinder.install(Elements.java:409)
jenkins_1 | at com.google.inject.spi.Elements.getElements(Elements.java:108)
jenkins_1 | at com.google.inject.internal.InjectorShell$Builder.build(InjectorShell.java:160)
jenkins_1 | at com.google.inject.internal.InternalInjectorCreator.build(InternalInjectorCreator.java:107)
jenkins_1 | at com.google.inject.Guice.createInjector(Guice.java:87)
jenkins_1 | at com.google.inject.Guice.createInjector(Guice.java:69)
jenkins_1 | at hudson.ExtensionFinder$GuiceFinder.<init>(ExtensionFinder.java:281)
jenkins_1 | at java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
jenkins_1 | at java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
jenkins_1 | at java.base/jdk.internal.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
jenkins_1 | at java.base/java.lang.reflect.Constructor.newInstance(Constructor.java:490)
jenkins_1 | at java.base/java.lang.Class.newInstance(Class.java:584)
jenkins_1 | at net.java.sezpoz.IndexItem.instance(IndexItem.java:181)
jenkins_1 | at hudson.ExtensionFinder$Sezpoz._find(ExtensionFinder.java:706)
jenkins_1 | at hudson.ExtensionFinder$Sezpoz.find(ExtensionFinder.java:692)
jenkins_1 | at hudson.ClassicPluginStrategy.findComponents(ClassicPluginStrategy.java:358)
jenkins_1 | at hudson.ExtensionList.load(ExtensionList.java:384)
jenkins_1 | at hudson.ExtensionList.ensureLoaded(ExtensionList.java:320)
jenkins_1 | at hudson.ExtensionList.getComponents(ExtensionList.java:184)
jenkins_1 | at jenkins.model.Jenkins$6.onInitMilestoneAttained(Jenkins.java:1188)
jenkins_1 | at jenkins.InitReactorRunner$1.onAttained(InitReactorRunner.java:88)
jenkins_1 | at org.jvnet.hudson.reactor.ReactorListener$Aggregator.lambda$onAttained$3(ReactorListener.java:108)
jenkins_1 | at org.jvnet.hudson.reactor.ReactorListener$Aggregator.run(ReactorListener.java:115)
jenkins_1 | at org.jvnet.hudson.reactor.ReactorListener$Aggregator.onAttained(ReactorListener.java:108)
jenkins_1 | at org.jvnet.hudson.reactor.Reactor$1.run(Reactor.java:183)
jenkins_1 | at org.jvnet.hudson.reactor.Reactor$Node.run(Reactor.java:121)
jenkins_1 | at jenkins.security.ImpersonatingExecutorService$1.run(ImpersonatingExecutorService.java:68)
jenkins_1 | at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
jenkins_1 | at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
jenkins_1 | at java.base/java.lang.Thread.run(Thread.java:829)
jenkins_1 | 2022-05-03 12:25:00.320+0000 [id=34] INFO jenkins.InitReactorRunner$1#onAttained: Prepared all plugins
jenkins_1 | 2022-05-03 12:25:00.345+0000 [id=37] INFO jenkins.InitReactorRunner$1#onAttained: Started all plugins
jenkins_1 | 2022-05-03 12:25:00.369+0000 [id=34] INFO jenkins.InitReactorRunner$1#onAttained: Augmented all extensions
How do I find out what causes this and how to correct it?
I found the reason: the "Pipeline: Nodes and Processes" plugin had not been updated because Jenkins reported it as incompatible. After updating it the exception is gone, and the whole system still appears to run fine.
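If the image is built from a plugin list, one way to avoid shipping a stale copy is to pin the plugin explicitly when building the image. A minimal sketch, assuming the official jenkins/jenkins image and that "Pipeline: Nodes and Processes" is the plugin ID workflow-durable-task-step:

FROM jenkins/jenkins:lts
# jenkins-plugin-cli ships with the official image; pulling the current
# workflow-durable-task-step release keeps "Pipeline: Nodes and Processes"
# compatible with the Jenkins core baked into the image.
RUN jenkins-plugin-cli --plugins workflow-durable-task-step:latest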
My Dockerfile is:
FROM openjdk:8
VOLUME /tmp
ADD target/demo-0.0.1-SNAPSHOT.jar app.jar
#RUN bash -c 'touch /app.jar'
#EXPOSE 8080
ENTRYPOINT ["java","-Dspring.data.mongodb.uri=mongodb://mongo/players","-jar","/app.jar"]
And the docker-compose.yml is:
version: "3"
services:
spring-docker:
build: .
restart: always
ports:
- "8080:8080"
depends_on:
- db
db:
image: mongo
volumes:
- ./data:/data/db
ports:
- "27000:27017"
restart: always
I have a Docker image, and when I run docker-compose up everything goes well without any error.
But in Postman, when I send a GET request to localhost:8080/player I do not get any output, so I used the IP of the docker-machine instead, e.g. 192.168.99.101:8080, but then I get a 404 Not Found error in Postman.
What is my mistake?
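Concretely, the requests I am trying look like this:

# from the Docker host; docker-compose publishes port 8080
curl -i http://localhost:8080/player

# from outside, via the docker-machine IP (Docker Toolbox setup)
curl -i http://192.168.99.101:8080/player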
The docker-compose logs:
$ docker-compose logs
Attaching to thesismongoproject_spring-docker_1, thesismongoproject_db_1
spring-docker_1 |
spring-docker_1 | . ____ _ __ _ _
spring-docker_1 | /\\ / ___'_ __ _ _(_)_ __ __ _ \ \ \ \
spring-docker_1 | ( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \
spring-docker_1 | \\/ ___)| |_)| | | | | || (_| | ) ) ) )
spring-docker_1 | ' |____| .__|_| |_|_| |_\__, | / / / /
spring-docker_1 | =========|_|==============|___/=/_/_/_/
spring-docker_1 | :: Spring Boot :: (v2.2.6.RELEASE)
spring-docker_1 |
spring-docker_1 | 2020-05-31 11:36:39.598 INFO 1 --- [ main] thesisMongoProject.Application : Starting Application v0.0.1-SNAPSHOT on e81ccff8ba0e with PID 1 (/demo-0.0.1-SNAPSHOT.jar started by root in /)
spring-docker_1 | 2020-05-31 11:36:39.620 INFO 1 --- [ main] thesisMongoProject.Application : No active profile set, falling back to default profiles: default
spring-docker_1 | 2020-05-31 11:36:41.971 INFO 1 --- [ main] .s.d.r.c.RepositoryConfigurationDelegate : Bootstrapping Spring Data MongoDB repositories in DEFAULT mode.
spring-docker_1 | 2020-05-31 11:36:42.216 INFO 1 --- [ main] .s.d.r.c.RepositoryConfigurationDelegate : Finished Spring Data repository scanning in 225ms. Found 4 MongoDB repository interfaces.
spring-docker_1 | 2020-05-31 11:36:44.319 INFO 1 --- [ main] o.s.b.w.embedded.tomcat.TomcatWebServer : Tomcat initialized with port(s): 8080 (http)
spring-docker_1 | 2020-05-31 11:36:44.381 INFO 1 --- [ main] o.apache.catalina.core.StandardService : Starting service [Tomcat]
spring-docker_1 | 2020-05-31 11:36:44.381 INFO 1 --- [ main] org.apache.catalina.core.StandardEngine : Starting Servlet engine: [Apache Tomcat/9.0.33]
spring-docker_1 | 2020-05-31 11:36:44.619 INFO 1 --- [ main] o.a.c.c.C.[Tomcat].[localhost].[/] : Initializing Spring embedded WebApplicationContext
spring-docker_1 | 2020-05-31 11:36:44.619 INFO 1 --- [ main] o.s.web.context.ContextLoader : Root WebApplicationContext: initialization completed in 4810 ms
spring-docker_1 | 2020-05-31 11:36:46.183 INFO 1 --- [ main] org.mongodb.driver.cluster : Cluster created with settings {hosts=[db:27017], mode=SINGLE, requiredClusterType=UNKNOWN, serverSelectionTimeout='30000 ms', maxWaitQueueSize=500}
spring-docker_1 | 2020-05-31 11:36:46.781 INFO 1 --- [null'}-db:27017] org.mongodb.driver.connection : Opened connection [connectionId{localValue:1, serverValue:1}] to db:27017
spring-docker_1 | 2020-05-31 11:36:46.802 INFO 1 --- [null'}-db:27017] org.mongodb.driver.cluster : Monitor thread successfully connected to server with description ServerDescription{address=db:27017, type=STANDALONE, state=CONNECTED, ok=true, version=ServerVersion{versionList=[4, 2, 7]}, minWireVersion=0, maxWireVersion=8, maxDocumentSize=16777216, logicalSessionTimeoutMinutes=30, roundTripTimeNanos=5468915}
spring-docker_1 | 2020-05-31 11:36:48.829 INFO 1 --- [ main] o.s.s.concurrent.ThreadPoolTaskExecutor : Initializing ExecutorService 'applicationTaskExecutor'
spring-docker_1 | 2020-05-31 11:36:49.546 INFO 1 --- [ main] o.s.b.w.embedded.tomcat.TomcatWebServer : Tomcat started on port(s): 8080 (http) with context path ''
spring-docker_1 | 2020-05-31 11:36:49.581 INFO 1 --- [ main] thesisMongoProject.Application : Started Application in 11.264 seconds (JVM running for 13.615)
spring-docker_1 | 2020-05-31 11:40:10.290 INFO 1 --- [extShutdownHook] o.s.s.concurrent.ThreadPoolTaskExecutor : Shutting down ExecutorService 'applicationTaskExecutor'
db_1 | 2020-05-31T11:36:35.623+0000 I CONTROL [main] Automatically disabling TLS 1.0, to force-enable TLS 1.0 specify --sslDisabledProtocols 'none'
db_1 | 2020-05-31T11:36:35.639+0000 W ASIO [main] No TransportLayer configured during NetworkInterface startup
db_1 | 2020-05-31T11:36:35.645+0000 I CONTROL [initandlisten] MongoDB starting : pid=1 port=27017 dbpath=/data/db 64-bit host=1a0e5bc0c503
db_1 | 2020-05-31T11:36:35.646+0000 I CONTROL [initandlisten] db version v4.2.7
db_1 | 2020-05-31T11:36:35.646+0000 I CONTROL [initandlisten] git version: 51d9fe12b5d19720e72dcd7db0f2f17dd9a19212
db_1 | 2020-05-31T11:36:35.646+0000 I CONTROL [initandlisten] OpenSSL version: OpenSSL 1.1.1 11 Sep 2018
db_1 | 2020-05-31T11:36:35.646+0000 I CONTROL [initandlisten] allocator: tcmalloc
db_1 | 2020-05-31T11:36:35.646+0000 I CONTROL [initandlisten] modules: none
db_1 | 2020-05-31T11:36:35.647+0000 I CONTROL [initandlisten] build environment:
db_1 | 2020-05-31T11:36:35.647+0000 I CONTROL [initandlisten]     distmod: ubuntu1804
db_1 | 2020-05-31T11:36:35.647+0000 I CONTROL [initandlisten]     distarch: x86_64
db_1 | 2020-05-31T11:36:35.647+0000 I CONTROL [initandlisten]     target_arch: x86_64
db_1 | 2020-05-31T11:36:35.648+0000 I CONTROL [initandlisten] options: { net: { bindIp: "*" } }
db_1 | 2020-05-31T11:36:35.649+0000 I STORAGE [initandlisten] Detected data files in /data/db created by the 'wiredTiger' storage engine, so setting the active storage engine to 'wiredTiger'.
db_1 | 2020-05-31T11:36:35.650+0000 I STORAGE [initandlisten] wiredtiger_open config: create,cache_size=256M,cache_overflow=(file_max=0M),session_max=33000,eviction=(threads_min=4,threads_max=4),config_base=false,statistics=(fast),log=(enabled=true,archive=true,path=journal,compressor=snappy),file_manager=(close_idle_time=100000,close_scan_interval=10,close_handle_minimum=250),statistics_log=(wait=0),verbose=[recovery_progress,checkpoint_progress],
db_1 | 2020-05-31T11:36:37.046+0000 I STORAGE [initandlisten] WiredTiger message [1590924997:46670][1:0x7f393f9a0b00], txn-recover: Recovering log 9 through 10
db_1 | 2020-05-31T11:36:37.231+0000 I STORAGE [initandlisten] WiredTiger message [1590924997:231423][1:0x7f393f9a0b00], txn-recover: Recovering log 10 through 10
db_1 | 2020-05-31T11:36:37.294+0000 I STORAGE [initandlisten] WiredTiger message [1590924997:294858][1:0x7f393f9a0b00], txn-recover: Main recovery loop: starting at 9/6016 to 10/256
db_1 | 2020-05-31T11:36:37.447+0000 I STORAGE [initandlisten] WiredTiger message [1590924997:447346][1:0x7f393f9a0b00], txn-recover: Recovering log 9 through 10
db_1 | 2020-05-31T11:36:37.564+0000 I STORAGE [initandlisten] WiredTiger message [1590924997:564841][1:0x7f393f9a0b00], txn-recover: Recovering log 10 through 10
db_1 | 2020-05-31T11:36:37.645+0000 I STORAGE [initandlisten] WiredTiger message [1590924997:645216][1:0x7f393f9a0b00], txn-recover: Set global recovery timestamp: (0, 0)
db_1 | 2020-05-31T11:36:37.681+0000 I RECOVERY [initandlisten] WiredTiger recoveryTimestamp. Ts: Timestamp(0, 0)
db_1 | 2020-05-31T11:36:37.703+0000 I STORAGE [initandlisten] Timestamp monitor starting
db_1 | 2020-05-31T11:36:37.704+0000 I CONTROL [initandlisten]
db_1 | 2020-05-31T11:36:37.704+0000 I CONTROL [initandlisten] ** WARNING: Access control is not enabled for the database.
db_1 | 2020-05-31T11:36:37.704+0000 I CONTROL [initandlisten] **          Read and write access to data and configuration is unrestricted.
db_1 | 2020-05-31T11:36:37.705+0000 I CONTROL [initandlisten]
db_1 | 2020-05-31T11:36:37.712+0000 I SHARDING [initandlisten] Marking collection local.system.replset as collection version: <unsharded>
db_1 | 2020-05-31T11:36:37.722+0000 I STORAGE [initandlisten] Flow Control is enabled on this deployment.
db_1 | 2020-05-31T11:36:37.722+0000 I SHARDING [initandlisten] Marking collection admin.system.roles as collection version: <unsharded>
db_1 | 2020-05-31T11:36:37.724+0000 I SHARDING [initandlisten] Marking collection admin.system.version as collection version: <unsharded>
db_1 | 2020-05-31T11:36:37.726+0000 I SHARDING [initandlisten] Marking collection local.startup_log as collection version: <unsharded>
db_1 | 2020-05-31T11:36:37.729+0000 I FTDC [initandlisten] Initializing full-time diagnostic data capture with directory '/data/db/diagnostic.data'
db_1 | 2020-05-31T11:36:37.740+0000 I SHARDING [LogicalSessionCacheRefresh] Marking collection config.system.sessions as collection version: <unsharded>
db_1 | 2020-05-31T11:36:37.748+0000 I SHARDING [LogicalSessionCacheReap] Marking collection config.transactions as collection version: <unsharded>
db_1 | 2020-05-31T11:36:37.748+0000 I NETWORK [listener] Listening on /tmp/mongodb-27017.sock
db_1 | 2020-05-31T11:36:37.748+0000 I NETWORK [listener] Listening on 0.0.0.0
db_1 | 2020-05-31T11:36:37.749+0000 I NETWORK [listener] waiting for connections on port 27017
db_1 | 2020-05-31T11:36:38.001+0000 I SHARDING [ftdc] Marking collection local.oplog.rs as collection version: <unsharded>
db_1 | 2020-05-31T11:36:46.536+0000 I NETWORK [listener] connection accepted from 172.19.0.3:40656 #1 (1 connection now open)
db_1 | 2020-05-31T11:36:46.653+0000 I NETWORK [conn1] received client metadata from 172.19.0.3:40656 conn1: { driver: { name: "mongo-java-driver|legacy", version: "3.11.2" }, os: { type: "Linux", name: "Linux", architecture: "amd64", version: "4.14.154-boot2docker" }, platform: "Java/Oracle Corporation/1.8.0_252-b09" }
db_1 | 2020-05-31T11:40:10.302+0000 I NETWORK [conn1] end connection 172.19.0.3:40656 (0 connections now open)
db_1 | 2020-05-31T11:40:10.523+0000 I CONTROL [signalProcessingThread] got signal 15 (Terminated), will terminate after current cmd ends
db_1 | 2020-05-31T11:40:10.730+0000 I NETWORK [signalProcessingThread] shutdown: going to close listening sockets...
db_1 | 2020-05-31T11:40:10.731+0000 I NETWORK [listener] removing socket file: /tmp/mongodb-27017.sock
db_1 | 2020-05-31T11:40:10.731+0000 I - [signalProcessingThread] Stopping further Flow Control ticket acquisitions.
db_1 | 2020-05-31T11:40:10.796+0000 I CONTROL [signalProcessingThread] Shutting down free monitoring
db_1 | 2020-05-31T11:40:10.800+0000 I FTDC [signalProcessingThread] Shutting down full-time diagnostic data capture
db_1 | 2020-05-31T11:40:10.803+0000 I STORAGE [signalProcessingThread] Deregistering all the collections
db_1 | 2020-05-31T11:40:10.811+0000 I STORAGE [signalProcessingThread] Timestamp monitor shutting down
db_1 | 2020-05-31T11:40:10.828+0000 I STORAGE [TimestampMonitor] Timestamp monitor is stopping due to: interrupted at shutdown
db_1 | 2020-05-31T11:40:10.828+0000 I STORAGE [signalProcessingThread] WiredTigerKVEngine shutting down
db_1 | 2020-05-31T11:40:10.829+0000 I STORAGE [signalProcessingThread] Shutting down session sweeper thread
db_1 | 2020-05-31T11:40:10.829+0000 I STORAGE [signalProcessingThread] Finished shutting down session sweeper thread
db_1 | 2020-05-31T11:40:10.829+0000 I STORAGE [signalProcessingThread] Shutting down journal flusher thread
db_1 | 2020-05-31T11:40:10.916+0000 I STORAGE [signalProcessingThread] Finished shutting down journal flusher thread
db_1 | 2020-05-31T11:40:10.917+0000 I STORAGE [signalProcessingThread] Shutting down checkpoint thread
db_1 | 2020-05-31T11:40:10.917+0000 I STORAGE [signalProcessingThread] Finished shutting down checkpoint thread
db_1 | 2020-05-31T11:40:10.935+0000 I STORAGE [signalProcessingThread] shutdown: removing fs lock...
db_1 | 2020-05-31T11:40:10.942+0000 I CONTROL [signalProcessingThread] now exiting
db_1 | 2020-05-31T11:40:10.943+0000 I CONTROL [signalProcessingThread] shutting down with code:0
To solve this problem I had to add the @EnableAutoConfiguration(exclude = {MongoAutoConfiguration.class}) annotation.
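For reference, a minimal sketch of where that exclude lives; the package and class names below are taken from the logger shown in the logs (thesisMongoProject.Application), and passing the exclude through @SpringBootApplication is equivalent to the @EnableAutoConfiguration form:

package thesisMongoProject;

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.boot.autoconfigure.mongo.MongoAutoConfiguration;

// Excluding MongoAutoConfiguration here has the same effect as
// @EnableAutoConfiguration(exclude = {MongoAutoConfiguration.class}).
@SpringBootApplication(exclude = {MongoAutoConfiguration.class})
public class Application {
    public static void main(String[] args) {
        SpringApplication.run(Application.class, args);
    }
}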
I use Articulate (https://github.com/samtecspg/articulate) to build my chatbot.
I get the error below when I execute docker-compose up:
api_1 |
api_1 | /usr/src/app/server/index.js:33
api_1 | throw err;
api_1 | ^
api_1 | [cluster_block_exception] blocked by: [FORBIDDEN/12/index read-only / allow delete (api)]; :: {"path":"/document/_mapping/document","query":{},"body":"{\"properties\":{\"document\":{\"type\":\"text\"},\"time_stamp\":{\"type\":\"date\"},\"maximum_saying_score\":{\"type\":\"float\"},\"maximum_category_score\":{\"type\":\"float\"},\"total_elapsed_time_ms\":{\"type\":\"text\"},\"rasa_results\":{\"type\":\"object\"},\"session\":{\"type\":\"text\"},\"agent_id\":{\"type\":\"integer\"},\"agent_model\":{\"type\":\"text\"}}}","statusCode":403,"response":"{\"error\":{\"root_cause\":[{\"type\":\"cluster_block_exception\",\"reason\":\"blocked by: [FORBIDDEN/12/index read-only / allow delete (api)];\"}],\"type\":\"cluster_block_exception\",\"reason\":\"blocked by: [FORBIDDEN/12/index read-only / allow delete (api)];\"},\"status\":403}"}
api_1 | at respond (/usr/src/app/node_modules/elasticsearch/src/lib/transport.js:308:15)
api_1 | at checkRespForFailure (/usr/src/app/node_modules/elasticsearch/src/lib/transport.js:267:7)
api_1 | at HttpConnector.<anonymous> (/usr/src/app/node_modules/elasticsearch/src/lib/connectors/http.js:165:7)
api_1 | at IncomingMessage.wrapper (/usr/src/app/node_modules/lodash/lodash.js:4949:19)
api_1 | at IncomingMessage.emit (events.js:187:15)
api_1 | at IncomingMessage.EventEmitter.emit (domain.js:442:20)
api_1 | at endReadableNT (_stream_readable.js:1081:12)
api_1 | at process._tickCallback (internal/process/next_tick.js:63:19)
api_1 | [nodemon] app crashed - waiting for file changes before starting...
I've cleared 15 GB of disk space, yet I still get this error.
The indices have probably been switched to read-only. Elasticsearch applies this block when the disk flood-stage watermark (95% by default) is exceeded, and on older versions it is not lifted automatically once you free up space; you have to reset it yourself.
Use the following command:
curl -s -H 'Content-Type: application/json' -XPUT '[IP-server]:9200/_all/_settings?pretty' -d '{
  "index": {
    "blocks": { "read_only_allow_delete": "false" }
  }
}'
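Elasticsearch will put the block back if the disk fills up again, so it is also worth confirming how full the data node actually is. A quick check using the _cat API (same [IP-server] placeholder as above):

curl -s '[IP-server]:9200/_cat/allocation?v'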
I did an upgrade of my Docker MariaDB on my server with:
docker-compose pull
docker-compose up -d
My version before:
Server version: 10.2.14-MariaDB-10.2.14+maria~jessie mariadb.org binary distribution
SHOW VARIABLES LIKE "%version%";
+-------------------------+--------------------------------------+
| Variable_name | Value |
+-------------------------+--------------------------------------+
| innodb_version | 5.7.21 |
| protocol_version | 10 |
| slave_type_conversions | |
| version | 10.2.14-MariaDB-10.2.14+maria~jessie |
| version_comment | mariadb.org binary distribution |
| version_compile_machine | x86_64 |
| version_compile_os | debian-linux-gnu |
| version_malloc_library | system |
| version_ssl_library | OpenSSL 1.0.1t 3 May 2016 |
| wsrep_patch_version | wsrep_25.23 |
+-------------------------+--------------------------------------+
My version now:
Server version: 10.3.9-MariaDB-1:10.3.9+maria~bionic mariadb.org binary distribution
+---------------------------------+------------------------------------------+
| Variable_name | Value |
+---------------------------------+------------------------------------------+
| innodb_version | 10.3.9 |
| protocol_version | 10 |
| slave_type_conversions | |
| system_versioning_alter_history | ERROR |
| system_versioning_asof | DEFAULT |
| version | 10.3.9-MariaDB-1:10.3.9+maria~bionic |
| version_comment | mariadb.org binary distribution |
| version_compile_machine | x86_64 |
| version_compile_os | debian-linux-gnu |
| version_malloc_library | system |
| version_source_revision | ca26f91bcaa21933147974c823852a2e1c2e2bd7 |
| version_ssl_library | OpenSSL 1.1.0g 2 Nov 2017 |
| wsrep_patch_version | wsrep_25.23 |
+---------------------------------+------------------------------------------+
So it seems it was an upgrade from 10.2 to 10.3.
Upgrading from MariaDB 10.2 to MariaDB 10.3
Now I get the following warnings in docker-compose logs:
2018-09-28 13:03:38 0 [Warning] InnoDB: Table mysql/innodb_table_stats has length mismatch in the column name table_name. Please run mysql_upgrade
2018-09-28 13:03:38 0 [Warning] InnoDB: Table mysql/innodb_index_stats has length mismatch in the column name table_name. Please run mysql_upgrade
The database is working as expected. What should I do to get rid of these warnings?
While I was writing the question I was able to fix it myself. If you are also facing this problem:
Connect to the Docker database container:
docker exec -u 0 -i -t CONTAINER_NAME /bin/bash
Run mysql_upgrade as suggested in the warning message:
mysql_upgrade --user=root --password=xxyy --host=localhost
Then restart the stack with Docker Compose:
docker-compose stop
docker-compose start
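The same upgrade can also be run in one shot without an interactive shell. A sketch assuming the database service is called db in docker-compose.yml (adjust the service name and credentials to your setup):

# run mysql_upgrade inside the running MariaDB service container
docker-compose exec db mysql_upgrade --user=root --password=xxyy

# then restart the service so it picks up the upgraded system tables
docker-compose restart db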
I'm trying to bring up the Kafka server, but I'm getting this error and I have no idea what is going on. I'm running Kafka in a Docker container; the version I'm using is 1.0.1 and the ZooKeeper image is the latest.
kafka_1 | waiting for kafka to be ready
kafka_1 | [2018-03-13 12:48:19,886] FATAL (kafka.Kafka$)
kafka_1 | org.apache.kafka.common.config.ConfigException: Invalid value 0version=1.0.1 for configuration group.initial.rebalance.delay.ms: Not a number of type INT
kafka_1 | at org.apache.kafka.common.config.ConfigDef.parseType(ConfigDef.java:713)
kafka_1 | at org.apache.kafka.common.config.ConfigDef.parseValue(ConfigDef.java:460)
kafka_1 | at org.apache.kafka.common.config.ConfigDef.parse(ConfigDef.java:453)
kafka_1 | at org.apache.kafka.common.config.AbstractConfig.<init>(AbstractConfig.java:62)
kafka_1 | at kafka.server.KafkaConfig.<init>(KafkaConfig.scala:897)
kafka_1 | at kafka.server.KafkaConfig$.fromProps(KafkaConfig.scala:881)
kafka_1 | at kafka.server.KafkaConfig$.fromProps(KafkaConfig.scala:878)
kafka_1 | at kafka.server.KafkaServerStartable$.fromProps(KafkaServerStartable.scala:28)
kafka_1 | at kafka.Kafka$.main(Kafka.scala:82)
I have tried lowering the Kafka version; I used versions 1.0.1, 10.0.0 and 0.11.0.2 and still receive the same error.
Any suggestions on how to make Kafka work?
Thanks in advance.