I am monitoring a Docker container that runs Elasticsearch 6.2.3.
Every day it dies, and I have not been able to figure out why.
I think it is memory, but where can I find proof of that?
Running top on the host (Linux) right now, I see:
Virtual Mem = 53 GB
Res = 26.7 GB
docker inspect gives me this:
"Memory": 34225520640,
"CpusetMems": "",
"KernelMemory": 0,
"MemoryReservation": 0,
"MemorySwap": 34225520640,
"MemorySwappiness": null,
"Name": "memlock",
The JVM params are
/usr/bin/java
-Xms1g
-Xmx1g
-XX:+UseConcMarkSweepGC
-XX:CMSInitiatingOccupancyFraction=75
-XX:+UseCMSInitiatingOccupancyOnly
-XX:+AlwaysPreTouch
-Xss1m
-Djava.awt.headless=true
-Dfile.encoding=UTF-8
-Djna.nosys=true
-XX:-OmitStackTraceInFastThrow
-Dio.netty.noUnsafe=true
-Dio.netty.noKeySetOptimization=true
-Dio.netty.recycler.maxCapacityPerThread=0
-Dlog4j.shutdownHookEnabled=false
-Dlog4j2.disable.jmx=true
-Djava.io.tmpdir=/usr/share/elasticsearch/tmp
-XX:+HeapDumpOnOutOfMemoryError
-XX:+PrintGCDetails
-XX:+PrintGCDateStamps
-XX:+PrintTenuringDistribution
-XX:+PrintGCApplicationStoppedTime
-Xloggc:logs/gc.log
-XX:+UseGCLogFileRotation
-XX:NumberOfGCLogFiles=32
-XX:GCLogFileSize=64m
-Des.cgroups.hierarchy.override=/
-Xms26112m
-Xmx26112m
-Des.path.home=/usr/share/elasticsearch
-Des.path.conf=/usr/share/elasticsearch/config
-cp
/usr/share/elasticsearch/lib/*
org.elasticsearch.bootstrap.Elasticsearch
and here is the log
[2019-01-30T07:14:01,278] [app-mesos-orders_api-2019.01.30/6l7Ga1I5T3qhLKYmWjQpRA] update_mapping [doc]
[2019-01-30T07:25:53,489] initializing ...
[2019-01-30T07:25:53,581] using [1] data paths, mounts [[/usr/share/elasticsearch/data (/dev/mapper/vg_tobias-lv_tobias)]], net usable_space [126.8gb], net total_space [199.8gb], types [xfs]
[2019-01-30T07:25:53,581] heap size [25.4gb], compressed ordinary object pointers [true]
[2019-01-30T07:26:12,390], node ID [-sJqW_h1TKy9c_Ka08In0A]
[2019-01-30T07:26:12,391] version[6.2.3], pid[1], build[c59ff00/2018-03-13T10:06:29.741383Z], OS[Linux/3.10.0-862.11.6.el7.x86_64/amd64], JVM[Oracle Corporation/OpenJDK 64-Bit Server VM/1.8.0_151/25.151-b12]
[2019-01-30T07:26:12,391] JVM arguments [-Xms1g, -Xmx1g, -XX:+UseConcMarkSweepGC, -XX:CMSInitiatingOccupancyFraction=75, -XX:+UseCMSInitiatingOccupancyOnly, -XX:+AlwaysPreTouch, -Xss1m, -Djava.awt.headless=true, -Dfile.encoding=UTF-8, -Djna.nosys=true, -XX:-OmitStackTraceInFastThrow, -Dio.netty.noUnsafe=true, -Dio.netty.noKeySetOptimization=true, -Dio.netty.recycler.maxCapacityPerThread=0, -Dlog4j.shutdownHookEnabled=false, -Dlog4j2.disable.jmx=true, -Djava.io.tmpdir=/usr/share/elasticsearch/tmp, -XX:+HeapDumpOnOutOfMemoryError, -XX:+PrintGCDetails, -XX:+PrintGCDateStamps, -XX:+PrintTenuringDistribution, -XX:+PrintGCApplicationStoppedTime, -Xloggc:logs/gc.log, -XX:+UseGCLogFileRotation, -XX:NumberOfGCLogFiles=32, -XX:GCLogFileSize=64m, -Des.cgroups.hierarchy.override=/, -Xms26112m, -Xmx26112m, -Des.path.home=/usr/share/elasticsearch, -Des.path.conf=/usr/share/elasticsearch/config]
[2019-01-30T07:26:13,008] loaded module [aggs-matrix-stats]
[2019-01-30T07:26:13,008] loaded module [analysis-common]
[2019-01-30T07:26:13,008] loaded module [ingest-common]
[2019-01-30T07:26:13,008] loaded module [lang-expression]
[2019-01-30T07:26:13,008] loaded module [lang-mustache]
[2019-01-30T07:26:13,009] loaded module [lang-painless]
[2019-01-30T07:26:13,009] loaded module [mapper-extras]
[2019-01-30T07:26:13,009] loaded module [parent-join]
[2019-01-30T07:26:13,009] loaded module [percolator]
[2019-01-30T07:26:13,009] loaded module [rank-eval]
[2019-01-30T07:26:13,009] loaded module [reindex]
[2019-01-30T07:26:13,009] loaded module [repository-url]
[2019-01-30T07:26:13,009] loaded module [transport-netty4]
[2019-01-30T07:26:13,009] loaded module [tribe]
[2019-01-30T07:26:13,009] no plugins loaded
[2019-01-30T07:26:19,947] using discovery type [zen]
[2019-01-30T07:26:20,444] initialized
[2019-01-30T07:26:20,444] starting ...
[2019-01-30T07:26:20,600] publish_address {172.16.44.8:9300}, bound_addresses {172.17.0.14:9300}
[2019-01-30T07:26:21,507] bound or publishing to a non-loopback address, enforcing bootstrap checks
[2019-01-30T07:26:24,855] zen-disco-elected-as-master ([0] nodes joined), reason: new_master {elasticsearch-1}{-sJqW_h1TKy9c_Ka08In0A}{iGTgehBjQ3yPRm9nlTLbYw}{172.16.44.8}{172.16.44.8:9300}{rack=rack1}
[2019-01-30T07:26:24,861] new_master {elasticsearch-1}{-sJqW_h1TKy9c_Ka08In0A}{iGTgehBjQ3yPRm9nlTLbYw}{172.16.44.8}{172.16.44.8:9300}{rack=rack1}, reason: apply cluster state (from master [master {elasticsearch-1}{-sJqW_h1TKy9c_Ka08In0A}{iGTgehBjQ3yPRm9nlTLbYw}{172.16.44.8}{172.16.44.8:9300}{rack=rack1} committed version [1] source [zen-disco-elected-as-master ([0] nodes joined)]])
[2019-01-30T07:26:24,881] publish_address {172.16.44.8:9200}, bound_addresses {172.17.0.14:9200}
[2019-01-30T07:26:24,881] started
[2019-01-30T07:26:37,033] recovered [1535] indices into cluster_state
I am thinking that the Docker container runs out of memory and dies silently.
How can I prove this? And what can I do to solve it?
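One way to look for proof of an OOM kill from the host (a sketch; substitute your real container name or ID for elasticsearch-1, which I'm taking from the log above):

docker inspect -f '{{.State.OOMKilled}} {{.State.ExitCode}} {{.State.FinishedAt}}' elasticsearch-1
# the kernel also records OOM kills in the host's log
dmesg -T | grep -iE 'oom|killed process'

If OOMKilled is true, or the exit code is 137 (128 + SIGKILL), the kernel's OOM killer took the container down.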
I just used the below command to check the available disk space on my node.
Input: (Dev console)
GET /_cat/nodes?v&h=id,diskTotal,diskUsed,diskAvail,diskUsedPercent
Output:
id diskTotal diskUsed diskAvail diskUsedPercent
vcgA 9.6gb 8.6gb 960.4mb 90.26
I have a single node, and it shows 960.4mb as available space. Is it possible to increase that to 2gb, and can anyone tell me how I can achieve this?
Also, I don't have any index created in the cluster, so I'm not sure how 8.6gb of space got occupied.
I also added the below config properties to the elasticsearch.yml file:
cluster.routing.allocation.disk.threshold_enabled: true
cluster.routing.allocation.disk.watermark.flood_stage: 20gb
cluster.routing.allocation.disk.watermark.low: 30gb
cluster.routing.allocation.disk.watermark.high: 25gb
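Note that absolute watermark values specify required free space, so they must fit on the disk; with only 9.6gb total here, free-space thresholds of 20 to 30gb can never be satisfied. A hedged alternative using percentages instead (values illustrative, applied from the Dev console like the _cat request above):

PUT _cluster/settings
{
  "transient": {
    "cluster.routing.allocation.disk.watermark.low": "85%",
    "cluster.routing.allocation.disk.watermark.high": "90%",
    "cluster.routing.allocation.disk.watermark.flood_stage": "95%"
  }
}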
Log for Elasticsearch:
sudo docker logs c8eadd9d92f6
OpenJDK 64-Bit Server VM warning: Option UseConcMarkSweepGC was deprecated in version 9.0 and will likely be removed in a future release.
[2019-12-08T18:16:10,968][INFO ][o.e.e.NodeEnvironment ] [EWlGsNg] using [1] data paths, mounts [[/ (overlay)]], net usable_space [959.1mb], net total_space [9.6gb], types [overlay]
[2019-12-08T18:16:10,972][INFO ][o.e.e.NodeEnvironment ] [EWlGsNg] heap size [9.8gb], compressed ordinary object pointers [true]
[2019-12-08T18:16:10,975][INFO ][o.e.n.Node ] [EWlGsNg] node name derived from node ID [EWlGsNg9R4ChiOSACmSE5Q]; set [node.name] to override
[2019-12-08T18:16:10,975][INFO ][o.e.n.Node ] [EWlGsNg] version[6.6.0], pid[1], build[oss/tar/a9861f4/2019-01-24T11:27:09.439740Z], OS[Linux/4.15.0-1044-gcp/amd64], JVM[Oracle Corporation/OpenJDK 64-Bit Server VM/11.0.1/11.0.1+13]
[2019-12-08T18:16:10,975][INFO ][o.e.n.Node ] [EWlGsNg] JVM arguments [-Xms1g, -Xmx1g, -XX:+UseConcMarkSweepGC, -XX:CMSInitiatingOccupancyFraction=75, -XX:+UseCMSInitiatingOccupancyOnly, -Des.networkaddress.cache.ttl=60, -Des.networkaddress.cache.negative.ttl=10, -XX:+AlwaysPreTouch, -Xss1m, -Djava.awt.headless=true, -Dfile.encoding=UTF-8, -Djna.nosys=true, -XX:-OmitStackTraceInFastThrow, -Dio.netty.noUnsafe=true, -Dio.netty.noKeySetOptimization=true, -Dio.netty.recycler.maxCapacityPerThread=0, -Dlog4j.shutdownHookEnabled=false, -Dlog4j2.disable.jmx=true, -Djava.io.tmpdir=/tmp/elasticsearch-14249827840908502393, -XX:+HeapDumpOnOutOfMemoryError, -XX:HeapDumpPath=data, -XX:ErrorFile=logs/hs_err_pid%p.log, -Xlog:gc*,gc+age=trace,safepoint:file=logs/gc.log:utctime,pid,tags:filecount=32,filesize=64m, -Djava.locale.providers=COMPAT, -XX:UseAVX=2, -Des.cgroups.hierarchy.override=/, -Xmx10g, -Xms10g, -Des.path.home=/usr/share/elasticsearch, -Des.path.conf=/usr/share/elasticsearch/config, -Des.distribution.flavor=oss, -Des.distribution.type=tar]
[2019-12-08T18:16:11,844][INFO ][o.e.p.PluginsService ] [EWlGsNg] loaded module [aggs-matrix-stats]
[2019-12-08T18:16:11,845][INFO ][o.e.p.PluginsService ] [EWlGsNg] loaded module [analysis-common]
[2019-12-08T18:16:11,845][INFO ][o.e.p.PluginsService ] [EWlGsNg] loaded module [ingest-common]
[2019-12-08T18:16:11,845][INFO ][o.e.p.PluginsService ] [EWlGsNg] loaded module [lang-expression]
[2019-12-08T18:16:11,845][INFO ][o.e.p.PluginsService ] [EWlGsNg] loaded module [lang-mustache]
[2019-12-08T18:16:11,845][INFO ][o.e.p.PluginsService ] [EWlGsNg] loaded module [lang-painless]
[2019-12-08T18:16:11,845][INFO ][o.e.p.PluginsService ] [EWlGsNg] loaded module [mapper-extras]
[2019-12-08T18:16:11,845][INFO ][o.e.p.PluginsService ] [EWlGsNg] loaded module [parent-join]
[2019-12-08T18:16:11,846][INFO ][o.e.p.PluginsService ] [EWlGsNg] loaded module [percolator]
[2019-12-08T18:16:11,846][INFO ][o.e.p.PluginsService ] [EWlGsNg] loaded module [rank-eval]
[2019-12-08T18:16:11,846][INFO ][o.e.p.PluginsService ] [EWlGsNg] loaded module [reindex]
[2019-12-08T18:16:11,846][INFO ][o.e.p.PluginsService ] [EWlGsNg] loaded module [repository-url]
[2019-12-08T18:16:11,846][INFO ][o.e.p.PluginsService ] [EWlGsNg] loaded module [transport-netty4]
[2019-12-08T18:16:11,846][INFO ][o.e.p.PluginsService ] [EWlGsNg] loaded module [tribe]
[2019-12-08T18:16:11,847][INFO ][o.e.p.PluginsService ] [EWlGsNg] loaded plugin [ingest-geoip]
[2019-12-08T18:16:11,847][INFO ][o.e.p.PluginsService ] [EWlGsNg] loaded plugin [ingest-user-agent]
[2019-12-08T18:16:15,363][INFO ][o.e.d.DiscoveryModule ] [EWlGsNg] using discovery type [single-node] and host providers [settings]
[2019-12-08T18:16:15,919][INFO ][o.e.n.Node ] [EWlGsNg] initialized
[2019-12-08T18:16:15,919][INFO ][o.e.n.Node ] [EWlGsNg] starting ...
[2019-12-08T18:16:16,619][INFO ][o.e.t.TransportService ] [EWlGsNg] publish_address {192.20.9.2:9300}, bound_addresses {0.0.0.0:9300}
[2019-12-08T18:16:16,747][INFO ][o.e.h.n.Netty4HttpServerTransport] [EWlGsNg] publish_address {192.20.9.2:9200}, bound_addresses {0.0.0.0:9200}
[2019-12-08T18:16:16,747][INFO ][o.e.n.Node ] [EWlGsNg] started
[2019-12-08T18:16:16,781][INFO ][o.e.g.GatewayService ] [EWlGsNg] recovered [0] indices into cluster_state
[2019-12-08T18:16:16,931][INFO ][o.e.m.j.JvmGcMonitorService] [EWlGsNg] [gc][1] overhead, spent [447ms] collecting in the last [1s]
[2019-12-08T18:16:17,184][INFO ][o.e.c.m.MetaDataCreateIndexService] [EWlGsNg] [.kibana_1] creating index, cause [api], templates [], shards [1]/[1], mappings [doc]
[2019-12-08T18:16:17,195][INFO ][o.e.c.r.a.AllocationService] [EWlGsNg] updating number_of_replicas to [0] for indices [.kibana_1]
[2019-12-08T18:16:17,468][INFO ][o.e.c.r.a.AllocationService] [EWlGsNg] Cluster health status changed from [YELLOW] to [GREEN] (reason: [shards started [[.kibana_1][0]] ...]).
Note: I'm using ELK version 6.6.0.
Can someone please help me with this? I'm stuck and not sure how I can achieve it.
If you run your Elasticsearch via Docker, you can increase your total disk space via Docker Desktop -> Settings -> Resources -> Virtual disk limit. Doing this should automatically increase disk.total.
If this isn't achievable, I suggest you read the official Elasticsearch docs on increasing the disk capacity of data nodes for further information.
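If you're on a plain Linux host without Docker Desktop, a hedged alternative is to bind-mount a larger host directory as the data path, so disk.total reflects the host filesystem; the host path and image tag below are illustrative:

docker run -d -p 9200:9200 -p 9300:9300 \
  -e "discovery.type=single-node" \
  -v /data/esdata:/usr/share/elasticsearch/data \
  docker.elastic.co/elasticsearch/elasticsearch-oss:6.6.0

The image runs as uid 1000, so the host directory may first need chown -R 1000:1000 /data/esdata.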
I've downloaded scm-server 2 from Jenkins to run on my dedicated source code server. Unfortunately, it won't start for some reason. It works on my desktop computer, so probably there is just a package missing.
Using Ubuntu 18.04 with apache2 and jetty9 packages installed, which might be relevant.
Any ideas?
~/scm-server/bin$ ./scm-server
2019-09-16 10:33:58.689:INFO::main: Logging initialized @478ms to org.eclipse.jetty.util.log.StdErrLog
2019-09-16 10:33:59.595:INFO:oejs.Server:main: jetty-9.4.14.v20181114; built: 2018-11-14T21:20:31.478Z; git: c4550056e785fb5665914545889f21dc136ad9e6; jvm 12.0.2+10
2019-09-16 10:34:02.442:INFO:oejw.StandardDescriptorProcessor:main: NO JSP Support for /scm, did not find org.eclipse.jetty.jsp.JettyJspServlet
2019-09-16 10:34:02.514:INFO:oejs.session:main: DefaultSessionIdManager workerName=node0
2019-09-16 10:34:02.516:INFO:oejs.session:main: No SessionScavenger set, using defaults
2019-09-16 10:34:02.525:INFO:oejs.session:main: node0 Scavenging every 600000ms
WARNING: An illegal reflective access operation has occurred
WARNING: Illegal reflective access by se.jiderhamn.classloader.leak.prevention.ClassLoaderLeakPreventor (file:/home/klarre/scm-server/work/scm/webapp/WEB-INF/lib/classloader-leak-prevention-core-2.7.0.jar) to method java.lang.ClassLoader.isAncestor(java.lang.ClassLoader)
WARNING: Please consider reporting this to the maintainers of se.jiderhamn.classloader.leak.prevention.ClassLoaderLeakPreventor
WARNING: Use --illegal-access=warn to enable warnings of further illegal reflective access operations
WARNING: All illegal access operations will be denied in a future release
2019-09-16 10:34:04.105 [main] [ ] INFO sonia.scm.lifecycle.BootstrapContextFilter - register for restart events
2019-09-16 10:34:04.125 [main] [ ] INFO sonia.scm.event.LegmanScmEventBus - create new event bus ScmEventBus-1
2019-09-16 10:34:04.319:WARN:oejw.WebAppContext:main: Failed startup of context o.e.j.w.WebAppContext@4dc8caa7{SCM-Manager 2.0.0-SNAPSHOT,/scm,file:///home/klarre/scm-server/work/scm/webapp/,UNAVAILABLE}{/home/klarre/scm-server/var/webapp/scm-webapp.war}
java.util.ServiceConfigurationError: sonia.scm.event.ScmEventBus: Provider sonia.scm.event.LegmanScmEventBus could not be instantiated
at java.base/java.util.ServiceLoader.fail(ServiceLoader.java:583)
at java.base/java.util.ServiceLoader$ProviderImpl.newInstance(ServiceLoader.java:805)
at java.base/java.util.ServiceLoader$ProviderImpl.get(ServiceLoader.java:723)
at java.base/java.util.ServiceLoader$3.next(ServiceLoader.java:1395)
at sonia.scm.util.ServiceUtil.getService(ServiceUtil.java:99)
at sonia.scm.event.ScmEventBus.getInstance(ScmEventBus.java:89)
at sonia.scm.lifecycle.BootstrapContextFilter.initializeContext(BootstrapContextFilter.java:75)
at sonia.scm.lifecycle.BootstrapContextFilter.init(BootstrapContextFilter.java:68)
at org.eclipse.jetty.servlet.FilterHolder.initialize(FilterHolder.java:136)
at org.eclipse.jetty.servlet.ServletHandler.lambda$initialize$0(ServletHandler.java:750)
at java.base/java.util.Spliterators$ArraySpliterator.forEachRemaining(Spliterators.java:948)
at java.base/java.util.stream.Streams$ConcatSpliterator.forEachRemaining(Streams.java:734)
at java.base/java.util.stream.Streams$ConcatSpliterator.forEachRemaining(Streams.java:734)
at java.base/java.util.stream.ReferencePipeline$Head.forEach(ReferencePipeline.java:658)
at org.eclipse.jetty.servlet.ServletHandler.initialize(ServletHandler.java:744)
at org.eclipse.jetty.servlet.ServletContextHandler.startContext(ServletContextHandler.java:368)
at org.eclipse.jetty.webapp.WebAppContext.startWebapp(WebAppContext.java:1497)
at org.eclipse.jetty.webapp.WebAppContext.startContext(WebAppContext.java:1459)
at org.eclipse.jetty.server.handler.ContextHandler.doStart(ContextHandler.java:852)
at org.eclipse.jetty.servlet.ServletContextHandler.doStart(ServletContextHandler.java:278)
at org.eclipse.jetty.webapp.WebAppContext.doStart(WebAppContext.java:545)
at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:68)
at org.eclipse.jetty.util.component.ContainerLifeCycle.start(ContainerLifeCycle.java:138)
at org.eclipse.jetty.util.component.ContainerLifeCycle.doStart(ContainerLifeCycle.java:117)
at org.eclipse.jetty.server.handler.AbstractHandler.doStart(AbstractHandler.java:113)
at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:68)
at org.eclipse.jetty.util.component.ContainerLifeCycle.start(ContainerLifeCycle.java:138)
at org.eclipse.jetty.server.Server.start(Server.java:415)
at org.eclipse.jetty.util.component.ContainerLifeCycle.doStart(ContainerLifeCycle.java:108)
at org.eclipse.jetty.server.handler.AbstractHandler.doStart(AbstractHandler.java:113)
at org.eclipse.jetty.server.Server.doStart(Server.java:382)
at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:68)
at sonia.scm.server.ScmServer.init(ScmServer.java:139)
at sonia.scm.server.ScmServer.run(ScmServer.java:100)
at sonia.scm.server.ScmServerDaemon.main(ScmServerDaemon.java:62)
Caused by:
java.util.ServiceConfigurationError: com.github.legman.HandlerFindingStrategy: service type not accessible to unnamed module @2c78d320
at java.base/java.util.ServiceLoader.fail(ServiceLoader.java:590)
at java.base/java.util.ServiceLoader.checkCaller(ServiceLoader.java:570)
at java.base/java.util.ServiceLoader.<init>(ServiceLoader.java:505)
at java.base/java.util.ServiceLoader.load(ServiceLoader.java:1647)
at com.github.legman.internal.ServiceLocator.locate(ServiceLocator.java:67)
at com.github.legman.internal.ServiceLocator.locate(ServiceLocator.java:92)
at com.github.legman.EventBus.<init>(EventBus.java:152)
at sonia.scm.event.LegmanScmEventBus.create(LegmanScmEventBus.java:79)
at sonia.scm.event.LegmanScmEventBus.<init>(LegmanScmEventBus.java:73)
at java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at java.base/jdk.internal.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.base/java.lang.reflect.Constructor.newInstanceWithCaller(Constructor.java:500)
at java.base/java.lang.reflect.Constructor.newInstance(Constructor.java:481)
at java.base/java.util.ServiceLoader$ProviderImpl.newInstance(ServiceLoader.java:781)
at java.base/java.util.ServiceLoader$ProviderImpl.get(ServiceLoader.java:723)
at java.base/java.util.ServiceLoader$3.next(ServiceLoader.java:1395)
at sonia.scm.util.ServiceUtil.getService(ServiceUtil.java:99)
at sonia.scm.event.ScmEventBus.getInstance(ScmEventBus.java:89)
at sonia.scm.lifecycle.BootstrapContextFilter.initializeContext(BootstrapContextFilter.java:75)
at sonia.scm.lifecycle.BootstrapContextFilter.init(BootstrapContextFilter.java:68)
at org.eclipse.jetty.servlet.FilterHolder.initialize(FilterHolder.java:136)
at org.eclipse.jetty.servlet.ServletHandler.lambda$initialize$0(ServletHandler.java:750)
at java.base/java.util.Spliterators$ArraySpliterator.forEachRemaining(Spliterators.java:948)
at java.base/java.util.stream.Streams$ConcatSpliterator.forEachRemaining(Streams.java:734)
at java.base/java.util.stream.Streams$ConcatSpliterator.forEachRemaining(Streams.java:734)
at java.base/java.util.stream.ReferencePipeline$Head.forEach(ReferencePipeline.java:658)
at org.eclipse.jetty.servlet.ServletHandler.initialize(ServletHandler.java:744)
at org.eclipse.jetty.servlet.ServletContextHandler.startContext(ServletContextHandler.java:368)
at org.eclipse.jetty.webapp.WebAppContext.startWebapp(WebAppContext.java:1497)
at org.eclipse.jetty.webapp.WebAppContext.startContext(WebAppContext.java:1459)
at org.eclipse.jetty.server.handler.ContextHandler.doStart(ContextHandler.java:852)
at org.eclipse.jetty.servlet.ServletContextHandler.doStart(ServletContextHandler.java:278)
at org.eclipse.jetty.webapp.WebAppContext.doStart(WebAppContext.java:545)
at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:68)
at org.eclipse.jetty.util.component.ContainerLifeCycle.start(ContainerLifeCycle.java:138)
at org.eclipse.jetty.util.component.ContainerLifeCycle.doStart(ContainerLifeCycle.java:117)
at org.eclipse.jetty.server.handler.AbstractHandler.doStart(AbstractHandler.java:113)
at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:68)
at org.eclipse.jetty.util.component.ContainerLifeCycle.start(ContainerLifeCycle.java:138)
at org.eclipse.jetty.server.Server.start(Server.java:415)
at org.eclipse.jetty.util.component.ContainerLifeCycle.doStart(ContainerLifeCycle.java:108)
at org.eclipse.jetty.server.handler.AbstractHandler.doStart(AbstractHandler.java:113)
at org.eclipse.jetty.server.Server.doStart(Server.java:382)
at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:68)
at sonia.scm.server.ScmServer.init(ScmServer.java:139)
at sonia.scm.server.ScmServer.run(ScmServer.java:100)
at sonia.scm.server.ScmServerDaemon.main(ScmServerDaemon.java:62)
2019-09-16 10:34:04.377:INFO:oejw.StandardDescriptorProcessor:main: NO JSP Support for /, did not find org.eclipse.jetty.jsp.JettyJspServlet
2019-09-16 10:34:04.382:INFO:oejsh.ContextHandler:main: Started o.e.j.w.WebAppContext@53d102a2{/,[file:///home/klarre/scm-server/var/webapp/docroot/],AVAILABLE}
2019-09-16 10:34:04.459:INFO:oejs.AbstractConnector:main: Started ServerConnector@18c49a9f{HTTP/1.1,[http/1.1]}{0.0.0.0:8088}
2019-09-16 10:34:04.460:INFO:oejs.Server:main: Started @6303ms
Changing from openjdk-12 to openjdk-8 solved the problem.
Took some time, but from version 2.0.0-rc4 on, SCM-Manager is compatible with Java 9 and above.
Here you can find out how to get the version: https://www.scm-manager.org/uncategorized/scm-manager-2-0-0-rc4/
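For completeness, a sketch of the Java switch on Ubuntu 18.04 (assuming the scm-server launch script honors JAVA_HOME):

sudo apt-get install openjdk-8-jre-headless
sudo update-alternatives --config java   # select the java-8 entry system-wide
# or scope it to SCM-Manager only:
export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64
~/scm-server/bin/scm-server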
I'm trying to start a Docker container. I am using a docker-elk.yml file to generate the ELK containers. The Elasticsearch and Kibana containers are working fine, but Logstash starts (I can even get a bash shell inside the container) and then stops automatically after some time.
Logs of container:
[2019-04-11T08:48:26,882][INFO ][logstash.runner ] Starting Logstash {"logstash.version"=>"6.6.0"}
[2019-04-11T08:48:33,497][INFO ][logstash.pipeline ] Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>4, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50}
[2019-04-11T08:48:34,062][INFO ][logstash.outputs.elasticsearch] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://elasticsearch:9200/]}}
[2019-04-11T08:48:34,310][WARN ][logstash.outputs.elasticsearch] Restored connection to ES instance {:url=>"http://elasticsearch:9200/"}
[2019-04-11T08:48:34,409][INFO ][logstash.outputs.elasticsearch] ES Output version determined {:es_version=>6}
[2019-04-11T08:48:34,415][WARN ][logstash.outputs.elasticsearch] Detected a 6.x and above cluster: the type event field won't be used to determine the document _type {:es_version=>6}
[2019-04-11T08:48:34,469][INFO ][logstash.outputs.elasticsearch] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["//elasticsearch:9200"]}
[2019-04-11T08:48:34,486][INFO ][logstash.outputs.elasticsearch] Using mapping template from {:path=>nil}
[2019-04-11T08:48:34,503][INFO ][logstash.outputs.elasticsearch] Attempting to install template {:manage_template=>{"template"=>"logstash-*", "version"=>60001, "settings"=>{"index.refresh_interval"=>"5s"}, "mappings"=>{"_default_"=>{"dynamic_templates"=>[{"message_field"=>{"path_match"=>"message", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false}}}, {"string_fields"=>{"match"=>"*", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false, "fields"=>{"keyword"=>{"type"=>"keyword", "ignore_above"=>256}}}}}], "properties"=>{"@timestamp"=>{"type"=>"date"}, "@version"=>{"type"=>"keyword"}, "geoip"=>{"dynamic"=>true, "properties"=>{"ip"=>{"type"=>"ip"}, "location"=>{"type"=>"geo_point"}, "latitude"=>{"type"=>"half_float"}, "longitude"=>{"type"=>"half_float"}}}}}}}}
[2019-04-11T08:48:34,960][INFO ][logstash.inputs.beats ] Beats inputs: Starting input listener {:address=>"0.0.0.0:5000"}
[2019-04-11T08:48:34,985][INFO ][logstash.pipeline ] Pipeline started successfully {:pipeline_id=>"main", :thread=>"#"}
[2019-04-11T08:48:35,077][INFO ][logstash.agent ] Pipelines running {:count=>1, :running_pipelines=>[:main], :non_running_pipelines=>[]}
[2019-04-11T08:48:35,144][INFO ][org.logstash.beats.Server] Starting server on port: 5000
[2019-04-11T08:48:35,499][INFO ][logstash.agent ] Successfully started Logstash API endpoint {:port=>9600}
[2019-04-11T08:48:50,591][INFO ][logstash.outputs.file ] Opening file {:path=>"/usr/share/logstash/output.log"}
[2019-04-11T13:16:51,947][WARN ][logstash.runner ] SIGTERM received. Shutting down.
[2019-04-11T13:16:56,498][INFO ][logstash.pipeline ] Pipeline has terminated {:pipeline_id=>"main", :thread=>"#"}
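The SIGTERM at 13:16:51 means the shutdown was requested from outside the JVM (for example docker stop, docker-compose down, or the daemon itself); an OOM kill would arrive as SIGKILL instead. A hedged way to see what happened to the container, assuming it is named logstash:

docker inspect -f '{{.State.ExitCode}} {{.State.OOMKilled}} {{.State.FinishedAt}}' logstash
docker events --since 24h --filter container=logstash

On subsequent start attempts, the container log shows this error: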
uri:classloader:/META-INF/jruby.home/lib/ruby/stdlib/rubygems/core_ext/kernel_require.rb:59:in `require':
It seems your ruby installation is missing psych (for YAML output).
To eliminate this warning, please install libyaml and reinstall your ruby.
[ERROR] 2019-04-11 14:18:02.058 [main] Logstash - java.lang.IllegalStateException: Logstash stopped processing because of an error: (GemspecError) There was a LoadError while loading logstash-core.gemspec:
load error: psych -- java.lang.RuntimeException: BUG: we can not copy embedded jar to temp directory
Does it try to require a relative path? That's been removed in Ruby 1.9.
(The same error also appeared on an earlier run, at 13:42:01.450.)
I have tried removing the tmp folder in the container, but it did not help.
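The "BUG: we can not copy embedded jar to temp directory" message usually means JRuby cannot unpack its embedded jars into the Java temp directory, e.g. because it is full, unwritable, or mounted noexec. A hedged sketch for the Logstash service in docker-elk.yml that points the JVM at a dedicated writable temp dir (paths illustrative; in the official images LS_JAVA_OPTS is appended to the JVM options):

logstash:
  environment:
    LS_JAVA_OPTS: "-Djava.io.tmpdir=/usr/share/logstash/tmp"
  volumes:
    - ./logstash_tmp:/usr/share/logstash/tmp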
I'm new to Docker and the ELK stack.
I referred to this doc for running Elasticsearch in Docker.
The Docker container listing says Elasticsearch is up on 9200 and 9300:
CONTAINER ID : ef87e2bccee9
IMAGE: docker.elastic.co/elasticsearch/elasticsearch:6.6.1
CREATED: 18 minutes ago
STATUS: Up 18 minutes
PORTS: 0.0.0.0:9200->9200/tcp, 0.0.0.0:9300->9300/tcp
NAMES: dreamy_roentgen
And the Elasticsearch log says:
C:\Windows\system32>docker run -p 9200:9200 -p 9300:9300 -e
"discovery.type=single-node"
docker.elastic.co/elasticsearch/elasticsearch:6.6.1
OpenJDK 64-Bit Server VM warning: Option UseConcMarkSweepGC was deprecated in version 9.0 and will likely be removed in a future release.
OpenJDK 64-Bit Server VM warning: UseAVX=2 is not supported on this CPU, setting it to UseAVX=0
[2019-02-23T04:18:00,510][INFO ][o.e.e.NodeEnvironment ] [GKy7sPe] using [1] data paths, mounts [[/ (overlay)]], net usable_space [52.3gb], net total_space [58.8gb], types [overlay]
[2019-02-23T04:18:00,542][INFO ][o.e.e.NodeEnvironment ] [GKy7sPe] heap size [1007.3mb], compressed ordinary object pointers [true]
[2019-02-23T04:18:00,561][INFO ][o.e.n.Node ] [GKy7sPe] node name derived from node ID [GKy7sPeERPaWgzMLoxWQFg]; set [node.name] to override
[2019-02-23T04:18:00,589][INFO ][o.e.n.Node ] [GKy7sPe] version[6.6.1], pid[1], build[default/tar/1fd8f69/2019-02-13T17:10:04.160291Z], OS[Linux/4.9.125-linuxkit/amd64], JVM[Oracle Corporation/OpenJDK 64-Bit Server VM/11.0.1/11.0.1+13]
[2019-02-23T04:18:00,592][INFO ][o.e.n.Node ] [GKy7sPe] JVM arguments [-Xms1g, -Xmx1g, -XX:+UseConcMarkSweepGC, -XX:CMSInitiatingOccupancyFraction=75, -XX:+UseCMSInitiatingOccupancyOnly, -Des.networkaddress.cache.ttl=60, -Des.networkaddress.cache.negative.ttl=10, -XX:+AlwaysPreTouch, -Xss1m, -Djava.awt.headless=true, -Dfile.encoding=UTF-8, -Djna.nosys=true, -XX:-OmitStackTraceInFastThrow, -Dio.netty.noUnsafe=true, -Dio.netty.noKeySetOptimization=true, -Dio.netty.recycler.maxCapacityPerThread=0, -Dlog4j.shutdownHookEnabled=false, -Dlog4j2.disable.jmx=true, -Djava.io.tmpdir=/tmp/elasticsearch-10308254911807837384, -XX:+HeapDumpOnOutOfMemoryError, -XX:HeapDumpPath=data, -XX:ErrorFile=logs/hs_err_pid%p.log, -Xlog:gc*,gc+age=trace,safepoint:file=logs/gc.log:utctime,pid,tags:filecount=32,filesize=64m, -Djava.locale.providers=COMPAT, -XX:UseAVX=2, -Des.cgroups.hierarchy.override=/, -Des.path.home=/usr/share/elasticsearch, -Des.path.conf=/usr/share/elasticsearch/config, -Des.distribution.flavor=default, -Des.distribution.type=tar]
[2019-02-23T04:18:15,059][INFO ][o.e.p.PluginsService ] [GKy7sPe] loaded module [aggs-matrix-stats]
[2019-02-23T04:18:15,059][INFO ][o.e.p.PluginsService ] [GKy7sPe] loaded module [analysis-common]
[2019-02-23T04:18:15,061][INFO ][o.e.p.PluginsService ] [GKy7sPe] loaded module [ingest-common]
[2019-02-23T04:18:15,063][INFO ][o.e.p.PluginsService ] [GKy7sPe] loaded module [lang-expression]
[2019-02-23T04:18:15,064][INFO ][o.e.p.PluginsService ] [GKy7sPe] loaded module [lang-mustache]
[2019-02-23T04:18:15,068][INFO ][o.e.p.PluginsService ] [GKy7sPe] loaded module [lang-painless]
[2019-02-23T04:18:15,068][INFO ][o.e.p.PluginsService ] [GKy7sPe] loaded module [mapper-extras]
[2019-02-23T04:18:15,070][INFO ][o.e.p.PluginsService ] [GKy7sPe] loaded module [parent-join]
[2019-02-23T04:18:15,071][INFO ][o.e.p.PluginsService ] [GKy7sPe] loaded module [percolator]
[2019-02-23T04:18:15,071][INFO ][o.e.p.PluginsService ] [GKy7sPe] loaded module [rank-eval]
[2019-02-23T04:18:15,072][INFO ][o.e.p.PluginsService ] [GKy7sPe] loaded module [reindex]
[2019-02-23T04:18:15,072][INFO ][o.e.p.PluginsService ] [GKy7sPe] loaded module [repository-url]
[2019-02-23T04:18:15,072][INFO ][o.e.p.PluginsService ] [GKy7sPe] loaded module [transport-netty4]
[2019-02-23T04:18:15,074][INFO ][o.e.p.PluginsService ] [GKy7sPe] loaded module [tribe]
[2019-02-23T04:18:15,087][INFO ][o.e.p.PluginsService ] [GKy7sPe] loaded module [x-pack-ccr]
[2019-02-23T04:18:15,088][INFO ][o.e.p.PluginsService ] [GKy7sPe] loaded module [x-pack-core]
[2019-02-23T04:18:15,089][INFO ][o.e.p.PluginsService ] [GKy7sPe] loaded module [x-pack-deprecation]
[2019-02-23T04:18:15,091][INFO ][o.e.p.PluginsService ] [GKy7sPe] loaded module [x-pack-graph]
[2019-02-23T04:18:15,094][INFO ][o.e.p.PluginsService ] [GKy7sPe] loaded module [x-pack-ilm]
[2019-02-23T04:18:15,096][INFO ][o.e.p.PluginsService ] [GKy7sPe] loaded module [x-pack-logstash]
[2019-02-23T04:18:15,097][INFO ][o.e.p.PluginsService ] [GKy7sPe] loaded module [x-pack-ml]
[2019-02-23T04:18:15,098][INFO ][o.e.p.PluginsService ] [GKy7sPe] loaded module [x-pack-monitoring]
[2019-02-23T04:18:15,099][INFO ][o.e.p.PluginsService ] [GKy7sPe] loaded module [x-pack-rollup]
[2019-02-23T04:18:15,100][INFO ][o.e.p.PluginsService ] [GKy7sPe] loaded module [x-pack-security]
[2019-02-23T04:18:15,102][INFO ][o.e.p.PluginsService ] [GKy7sPe] loaded module [x-pack-sql]
[2019-02-23T04:18:15,102][INFO ][o.e.p.PluginsService ] [GKy7sPe] loaded module [x-pack-upgrade]
[2019-02-23T04:18:15,102][INFO ][o.e.p.PluginsService ] [GKy7sPe] loaded module [x-pack-watcher]
[2019-02-23T04:18:15,105][INFO ][o.e.p.PluginsService ] [GKy7sPe] loaded plugin [ingest-geoip]
[2019-02-23T04:18:15,105][INFO ][o.e.p.PluginsService ] [GKy7sPe] loaded plugin [ingest-user-agent]
[2019-02-23T04:18:44,704][INFO ][o.e.x.s.a.s.FileRolesStore] [GKy7sPe] parsed [0] roles from file [/usr/share/elasticsearch/config/roles.yml]
[2019-02-23T04:18:48,619][INFO ][o.e.x.m.p.l.CppLogMessageHandler] [GKy7sPe] [controller/87] [Main.cc#109] controller (64 bit): Version 6.6.1 (Build a033f1b9679cab) Copyright (c) 2019 Elasticsearch BV
[2019-02-23T04:18:53,554][INFO ][o.e.d.DiscoveryModule ] [GKy7sPe] using discovery type [single-node] and host providers [settings]
[2019-02-23T04:18:57,834][INFO ][o.e.n.Node ] [GKy7sPe] initialized
[2019-02-23T04:18:57,836][INFO ][o.e.n.Node ] [GKy7sPe] starting ...
[2019-02-23T04:18:59,060][INFO ][o.e.t.TransportService ] [GKy7sPe] publish_address {172.17.0.2:9300}, bound_addresses {0.0.0.0:9300}
[2019-02-23T04:18:59,423][INFO ][o.e.h.n.Netty4HttpServerTransport] [GKy7sPe] publish_address {172.17.0.2:9200}, bound_addresses {0.0.0.0:9200}
[2019-02-23T04:18:59,431][INFO ][o.e.n.Node ] [GKy7sPe] started
[2019-02-23T04:19:00,187][WARN ][o.e.x.s.a.s.m.NativeRoleMappingStore] [GKy7sPe] Failed to clear cache for realms [[]]
[2019-02-23T04:19:00,657][INFO ][o.e.g.GatewayService ] [GKy7sPe] recovered [0] indices into cluster_state
[2019-02-23T04:19:02,610][INFO ][o.e.c.m.MetaDataIndexTemplateService] [GKy7sPe] adding template [.watch-history-9] for index patterns [.watcher-history-9*]
[2019-02-23T04:19:02,960][INFO ][o.e.c.m.MetaDataIndexTemplateService] [GKy7sPe] adding template [.watches] for index patterns [.watches*]
[2019-02-23T04:19:03,406][INFO ][o.e.c.m.MetaDataIndexTemplateService] [GKy7sPe] adding template [.triggered_watches] for index patterns [.triggered_watches*]
[2019-02-23T04:19:03,798][INFO ][o.e.c.m.MetaDataIndexTemplateService] [GKy7sPe] adding template [.monitoring-logstash] for index patterns [.monitoring-logstash-6-*]
[2019-02-23T04:19:04,277][INFO ][o.e.c.m.MetaDataIndexTemplateService] [GKy7sPe] adding template [.monitoring-es] for index patterns [.monitoring-es-6-*]
[2019-02-23T04:19:04,568][INFO ][o.e.c.m.MetaDataIndexTemplateService] [GKy7sPe] adding template [.monitoring-alerts] for index patterns [.monitoring-alerts-6]
[2019-02-23T04:19:04,944][INFO ][o.e.c.m.MetaDataIndexTemplateService] [GKy7sPe] adding template [.monitoring-beats] for index patterns [.monitoring-beats-6-*]
[2019-02-23T04:19:05,265][INFO ][o.e.c.m.MetaDataIndexTemplateService] [GKy7sPe] adding template [.monitoring-kibana] for index patterns [.monitoring-kibana-6-*]
[2019-02-23T04:19:06,992][INFO ][o.e.l.LicenseService ] [GKy7sPe] license [c7497c27-896c-441b-82c5-c33bc011f901] mode [basic] - valid
When I try localhost:9200 in my browser, it keeps waiting for a response, but Elasticsearch never responds.
Could someone share some input here?
Elevating @val's comment to an answer because I was facing the same issue, and the problem is indeed that Docker binds to the external IP address and doesn't bind to localhost at all. After that comment, I tried to reach my cluster from my laptop and I could; likewise, hitting 0.0.0.0:9200 instead of 127.0.0.1:9200 worked from the server.
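A hedged way to verify from the host which address actually answers (docker-machine applies to Docker Toolbox only, and the machine name default is an assumption):

curl http://localhost:9200
docker-machine ip default          # prints the VM's address, often 192.168.99.100
curl http://192.168.99.100:9200    # substitute the IP printed above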
When I run RAILS_ENV=production rake sunspot:solr:run, Solr starts as expected and the log looks something like this:
0 INFO (main) [ ] o.e.j.u.log Logging initialized @613ms
355 INFO (main) [ ] o.e.j.s.Server jetty-9.2.11.v20150529
380 WARN (main) [ ] o.e.j.s.h.RequestLogHandler !RequestLog
383 INFO (main) [ ] o.e.j.d.p.ScanningAppProvider Deployment monitor [file:/usr/local/rvm/gems/ruby-2.2.1/gems/sunspot_solr-2.2.3/solr/server/contexts/] at interval 0
1392 INFO (main) [ ] o.e.j.w.StandardDescriptorProcessor NO JSP Support for /solr, did not find org.apache.jasper.servlet.JspServlet
1437 WARN (main) [ ] o.e.j.s.SecurityHandler ServletContext@o.e.j.w.WebAppContext@1f3ff9d4{/solr,file:/usr/local/rvm/gems/ruby-2.2.1/gems/sunspot_solr-2.2.3/solr/server/solr-webapp/webapp/,STARTING}{/usr/local/rvm/gems/ruby-2.2.1/gems/sunspot_solr-2.2.3/solr/server/solr-webapp/webapp} has uncovered http methods for path: /
1456 INFO (main) [ ] o.a.s.s.SolrDispatchFilter SolrDispatchFilter.init(): WebAppClassLoader=879601585@346da7b1
1487 INFO (main) [ ] o.a.s.c.SolrResourceLoader JNDI not configured for solr (NoInitialContextEx)
1488 INFO (main) [ ] o.a.s.c.SolrResourceLoader using system property solr.solr.home: /usr/local/rvm/gems/ruby-2.2.1/gems/sunspot_solr-2.2.3/solr/server/solr
1491 INFO (main) [ ] o.a.s.c.SolrResourceLoader new SolrResourceLoader for directory: '/usr/local/rvm/gems/ruby-2.2.1/gems/sunspot_solr-2.2.3/solr/server/solr/'
1738 INFO (main) [ ] o.a.s.c.SolrXmlConfig Loading container configuration from /usr/local/rvm/gems/ruby-2.2.1/gems/sunspot_solr-2.2.3/solr/server/solr/solr.xml
1848 INFO (main) [ ] o.a.s.c.CoresLocator Config-defined core root directory: /usr/local/rvm/gems/ruby-2.2.1/gems/sunspot_solr-2.2.3/solr/server/solr
1882 INFO (main) [ ] o.a.s.c.CoreContainer New CoreContainer 394200281
1882 INFO (main) [ ] o.a.s.c.CoreContainer Loading cores into CoreContainer [instanceDir=/usr/local/rvm/gems/ruby-2.2.1/gems/sunspot_solr-2.2.3/solr/server/solr/]
1883 INFO (main) [ ] o.a.s.c.CoreContainer loading shared library: /usr/local/rvm/gems/ruby-2.2.1/gems/sunspot_solr-2.2.3/solr/server/solr/lib
1883 WARN (main) [ ] o.a.s.c.SolrResourceLoader Can't find (or read) directory to add to classloader: lib (resolved as: /usr/local/rvm/gems/ruby-2.2.1/gems/sunspot_solr-2.2.3/solr/server/solr/lib).
1905 INFO (main) [ ] o.a.s.h.c.HttpShardHandlerFactory created with socketTimeout : 600000,connTimeout : 60000,maxConnectionsPerHost : 20,maxConnections : 10000,corePoolSize : 0,maximumPoolSize : 2147483647,maxThreadIdleTime : 5,sizeOfQueue : -1,fairnessPolicy : false,useRetries : false,
2333 INFO (main) [ ] o.a.s.u.UpdateShardHandler Creating UpdateShardHandler HTTP client with params: socketTimeout=600000&connTimeout=60000&retry=true
2338 INFO (main) [ ] o.a.s.l.LogWatcher SLF4J impl is org.slf4j.impl.Log4jLoggerFactory
2339 INFO (main) [ ] o.a.s.l.LogWatcher Registering Log Listener [Log4j (org.slf4j.impl.Log4jLoggerFactory)]
2341 INFO (main) [ ] o.a.s.c.CoreContainer Security conf doesn't exist. Skipping setup for authorization module.
2341 INFO (main) [ ] o.a.s.c.CoreContainer No authentication plugin used.
2379 INFO (main) [ ] o.a.s.c.CoresLocator Looking for core definitions underneath /usr/local/rvm/gems/ruby-2.2.1/gems/sunspot_solr-2.2.3/solr/server/solr
2385 INFO (main) [ ] o.a.s.c.CoresLocator Found 0 core definitions
2389 INFO (main) [ ] o.a.s.s.SolrDispatchFilter user.dir=/usr/local/rvm/gems/ruby-2.2.1/gems/sunspot_solr-2.2.3/solr/server
2390 INFO (main) [ ] o.a.s.s.SolrDispatchFilter SolrDispatchFilter.init() done
2435 INFO (main) [ ] o.e.j.s.h.ContextHandler Started o.e.j.w.WebAppContext@1f3ff9d4{/solr,file:/usr/local/rvm/gems/ruby-2.2.1/gems/sunspot_solr-2.2.3/solr/server/solr-webapp/webapp/,AVAILABLE}{/usr/local/rvm/gems/ruby-2.2.1/gems/sunspot_solr-2.2.3/solr/server/solr-webapp/webapp}
2458 INFO (main) [ ] o.e.j.s.ServerConnector Started ServerConnector@649ad901{HTTP/1.1}{0.0.0.0:8080}
2458 INFO (main) [ ] o.e.j.s.Server Started @3074ms
6677 INFO (qtp207710485-20) [ ] o.a.s.s.SolrDispatchFilter [admin] webapp=null path=/admin/cores params={indexInfo=false&_=1453515549920&wt=json} status=0 QTime=58
6796 INFO (qtp207710485-19) [ ] o.a.s.s.SolrDispatchFilter [admin] webapp=null path=/admin/info/system params={_=1453515550069&wt=json} status=0 QTime=23
I can also access localhost:8080/solr.
However, when I run RAILS_ENV=production rake sunspot:solr:start --trace, I get a normal trace:
** Invoke sunspot:solr:start (first_time)
** Invoke environment (first_time)
** Execute environment
** Execute sunspot:solr:start
Removing stale PID file at /home/rails/webapp/solr/pids/production/sunspot-solr-production.pid
Successfully started Solr ...
Yet I can't access localhost:8080 (it gives me an ERR_CONNECTION_REFUSED), and when I try to do anything else involving Solr, I get an Errno::ECONNREFUSED: Connection refused error.
For example, when I run RAILS_ENV=production rake sunspot:solr:reindex after starting Solr, I get:
Errno::ECONNREFUSED: Connection refused - {:data=>"<?xml version=\"1.0\" encoding=\"UTF-8\"?><delete><query>type:Piece</query></delete>", :headers=>{"Content-Type"=>"text/xml"}, :method=>:post, :params=>{:wt=>:ruby}, :query=>"wt=ruby", :path=>"update", :uri=>#<URI::HTTP http://localhost:8080/solr/update?wt=ruby>, :open_timeout=>nil, :read_timeout=>nil, :retry_503=>nil, :retry_after_limit=>nil}
....
Errno::ECONNREFUSED: Connection refused - connect(2) for "localhost" port 8080
My sunspot.yml file looks like this:
production:
solr:
hostname: localhost
port: 8080 #tomcat defaults to port 8080
path: /solr
log_level: WARNING
solr_home: solr
The Solr server was working fine before. The connection errors started when I tried to seed the production db and received an EOFError: end of file reached. I can post the full trace for that error if needed.
Please help!
I had a similar situation. Sunspot would say that Solr had started successfully, but it never actually did. It turned out I was using Java 1.6, and installing JDK 8 fixed it for me.
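If you hit the same symptom, a quick hedged check (the exact log location varies by setup, so the path below is a guess):

java -version                                # the Solr bundled with sunspot_solr will not start on Java 1.6
RAILS_ENV=production rake sunspot:solr:run   # foreground mode, so the real startup error prints to the console
cat solr/logs/*.log                          # hypothetical location of the Solr logs under the app directory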