Elasticsearch is up and running in Docker but not responding to requests - docker

New to Docker and the ELK stack.
I referred to this doc for running Elasticsearch in Docker.
The Docker container listing says Elasticsearch is up on 9200 and 9300:
CONTAINER ID : ef87e2bccee9
IMAGE: docker.elastic.co/elasticsearch/elasticsearch:6.6.1
CREATED: 18 minutes ago
STATUS: Up 18 minutes
PORTS: 0.0.0.0:9200->9200/tcp, 0.0.0.0:9300->9300/tcp
NAMES: dreamy_roentgen
And the Elasticsearch log says:
C:\Windows\system32>docker run -p 9200:9200 -p 9300:9300 -e "discovery.type=single-node" docker.elastic.co/elasticsearch/elasticsearch:6.6.1
OpenJDK 64-Bit Server VM warning: Option UseConcMarkSweepGC was deprecated in version 9.0 and will likely be removed in a future release.
OpenJDK 64-Bit Server VM warning: UseAVX=2 is not supported on this CPU, setting it to UseAVX=0
[2019-02-23T04:18:00,510][INFO ][o.e.e.NodeEnvironment ] [GKy7sPe] using [1] data paths, mounts [[/ (overlay)]], net usable_space [52.3gb], net total_space [58.8gb], types [overlay]
[2019-02-23T04:18:00,542][INFO ][o.e.e.NodeEnvironment ] [GKy7sPe] heap size [1007.3mb], compressed ordinary object pointers [true]
[2019-02-23T04:18:00,561][INFO ][o.e.n.Node ] [GKy7sPe] node name derived from node ID [GKy7sPeERPaWgzMLoxWQFg]; set [node.name] to override
[2019-02-23T04:18:00,589][INFO ][o.e.n.Node ] [GKy7sPe] version[6.6.1], pid[1], build[default/tar/1fd8f69/2019-02-13T17:10:04.160291Z], OS[Linux/4.9.125-linuxkit/amd64], JVM[Oracle Corporation/OpenJDK 64-Bit Server VM/11.0.1/11.0.1+13]
[2019-02-23T04:18:00,592][INFO ][o.e.n.Node ] [GKy7sPe] JVM arguments [-Xms1g, -Xmx1g, -XX:+UseConcMarkSweepGC, -XX:CMSInitiatingOccupancyFraction=75, -XX:+UseCMSInitiatingOccupancyOnly, -Des.networkaddress.cache.ttl=60, -Des.networkaddress.cache.negative.ttl=10, -XX:+AlwaysPreTouch, -Xss1m, -Djava.awt.headless=true, -Dfile.encoding=UTF-8, -Djna.nosys=true, -XX:-OmitStackTraceInFastThrow, -Dio.netty.noUnsafe=true, -Dio.netty.noKeySetOptimization=true, -Dio.netty.recycler.maxCapacityPerThread=0, -Dlog4j.shutdownHookEnabled=false, -Dlog4j2.disable.jmx=true, -Djava.io.tmpdir=/tmp/elasticsearch-10308254911807837384, -XX:+HeapDumpOnOutOfMemoryError, -XX:HeapDumpPath=data, -XX:ErrorFile=logs/hs_err_pid%p.log, -Xlog:gc*,gc+age=trace,safepoint:file=logs/gc.log:utctime,pid,tags:filecount=32,filesize=64m, -Djava.locale.providers=COMPAT, -XX:UseAVX=2, -Des.cgroups.hierarchy.override=/, -Des.path.home=/usr/share/elasticsearch, -Des.path.conf=/usr/share/elasticsearch/config, -Des.distribution.flavor=default, -Des.distribution.type=tar]
[2019-02-23T04:18:15,059][INFO ][o.e.p.PluginsService ] [GKy7sPe] loaded module [aggs-matrix-stats]
[2019-02-23T04:18:15,059][INFO ][o.e.p.PluginsService ] [GKy7sPe] loaded module [analysis-common]
[2019-02-23T04:18:15,061][INFO ][o.e.p.PluginsService ] [GKy7sPe] loaded module [ingest-common]
[2019-02-23T04:18:15,063][INFO ][o.e.p.PluginsService ] [GKy7sPe] loaded module [lang-expression]
[2019-02-23T04:18:15,064][INFO ][o.e.p.PluginsService ] [GKy7sPe] loaded module [lang-mustache]
[2019-02-23T04:18:15,068][INFO ][o.e.p.PluginsService ] [GKy7sPe] loaded module [lang-painless]
[2019-02-23T04:18:15,068][INFO ][o.e.p.PluginsService ] [GKy7sPe] loaded module [mapper-extras]
[2019-02-23T04:18:15,070][INFO ][o.e.p.PluginsService ] [GKy7sPe] loaded module [parent-join]
[2019-02-23T04:18:15,071][INFO ][o.e.p.PluginsService ] [GKy7sPe] loaded module [percolator]
[2019-02-23T04:18:15,071][INFO ][o.e.p.PluginsService ] [GKy7sPe] loaded module [rank-eval]
[2019-02-23T04:18:15,072][INFO ][o.e.p.PluginsService ] [GKy7sPe] loaded module [reindex]
[2019-02-23T04:18:15,072][INFO ][o.e.p.PluginsService ] [GKy7sPe] loaded module [repository-url]
[2019-02-23T04:18:15,072][INFO ][o.e.p.PluginsService ] [GKy7sPe] loaded module [transport-netty4]
[2019-02-23T04:18:15,074][INFO ][o.e.p.PluginsService ] [GKy7sPe] loaded module [tribe]
[2019-02-23T04:18:15,087][INFO ][o.e.p.PluginsService ] [GKy7sPe] loaded module [x-pack-ccr]
[2019-02-23T04:18:15,088][INFO ][o.e.p.PluginsService ] [GKy7sPe] loaded module [x-pack-core]
[2019-02-23T04:18:15,089][INFO ][o.e.p.PluginsService ] [GKy7sPe] loaded module [x-pack-deprecation]
[2019-02-23T04:18:15,091][INFO ][o.e.p.PluginsService ] [GKy7sPe] loaded module [x-pack-graph]
[2019-02-23T04:18:15,094][INFO ][o.e.p.PluginsService ] [GKy7sPe] loaded module [x-pack-ilm]
[2019-02-23T04:18:15,096][INFO ][o.e.p.PluginsService ] [GKy7sPe] loaded module [x-pack-logstash]
[2019-02-23T04:18:15,097][INFO ][o.e.p.PluginsService ] [GKy7sPe] loaded module [x-pack-ml]
[2019-02-23T04:18:15,098][INFO ][o.e.p.PluginsService ] [GKy7sPe] loaded module [x-pack-monitoring]
[2019-02-23T04:18:15,099][INFO ][o.e.p.PluginsService ] [GKy7sPe] loaded module [x-pack-rollup]
[2019-02-23T04:18:15,100][INFO ][o.e.p.PluginsService ] [GKy7sPe] loaded module [x-pack-security]
[2019-02-23T04:18:15,102][INFO ][o.e.p.PluginsService ] [GKy7sPe] loaded module [x-pack-sql]
[2019-02-23T04:18:15,102][INFO ][o.e.p.PluginsService ] [GKy7sPe] loaded module [x-pack-upgrade]
[2019-02-23T04:18:15,102][INFO ][o.e.p.PluginsService ] [GKy7sPe] loaded module [x-pack-watcher]
[2019-02-23T04:18:15,105][INFO ][o.e.p.PluginsService ] [GKy7sPe] loaded plugin [ingest-geoip]
[2019-02-23T04:18:15,105][INFO ][o.e.p.PluginsService ] [GKy7sPe] loaded plugin [ingest-user-agent]
[2019-02-23T04:18:44,704][INFO ][o.e.x.s.a.s.FileRolesStore] [GKy7sPe] parsed [0] roles from file [/usr/share/elasticsearch/config/roles.yml]
[2019-02-23T04:18:48,619][INFO ][o.e.x.m.p.l.CppLogMessageHandler] [GKy7sPe] [controller/87] [Main.cc#109] controller (64 bit): Version 6.6.1 (Build a033f1b9679cab) Copyright (c) 2019 Elasticsearch BV
[2019-02-23T04:18:53,554][INFO ][o.e.d.DiscoveryModule ] [GKy7sPe] using discovery type [single-node] and host providers [settings]
[2019-02-23T04:18:57,834][INFO ][o.e.n.Node ] [GKy7sPe] initialized
[2019-02-23T04:18:57,836][INFO ][o.e.n.Node ] [GKy7sPe] starting ...
[2019-02-23T04:18:59,060][INFO ][o.e.t.TransportService ] [GKy7sPe] publish_address {172.17.0.2:9300}, bound_addresses {0.0.0.0:9300}
[2019-02-23T04:18:59,423][INFO ][o.e.h.n.Netty4HttpServerTransport] [GKy7sPe] publish_address {172.17.0.2:9200}, bound_addresses {0.0.0.0:9200}
[2019-02-23T04:18:59,431][INFO ][o.e.n.Node ] [GKy7sPe] started
[2019-02-23T04:19:00,187][WARN ][o.e.x.s.a.s.m.NativeRoleMappingStore] [GKy7sPe] Failed to clear cache for realms [[]]
[2019-02-23T04:19:00,657][INFO ][o.e.g.GatewayService ] [GKy7sPe] recovered [0] indices into cluster_state
[2019-02-23T04:19:02,610][INFO ][o.e.c.m.MetaDataIndexTemplateService] [GKy7sPe] adding template [.watch-history-9] for index patterns [.watcher-history-9*]
[2019-02-23T04:19:02,960][INFO ][o.e.c.m.MetaDataIndexTemplateService] [GKy7sPe] adding template [.watches] for index patterns [.watches*]
[2019-02-23T04:19:03,406][INFO ][o.e.c.m.MetaDataIndexTemplateService] [GKy7sPe] adding template [.triggered_watches] for index patterns [.triggered_watches*]
[2019-02-23T04:19:03,798][INFO ][o.e.c.m.MetaDataIndexTemplateService] [GKy7sPe] adding template [.monitoring-logstash] for index patterns [.monitoring-logstash-6-*]
[2019-02-23T04:19:04,277][INFO ][o.e.c.m.MetaDataIndexTemplateService] [GKy7sPe] adding template [.monitoring-es] for index patterns [.monitoring-es-6-*]
[2019-02-23T04:19:04,568][INFO ][o.e.c.m.MetaDataIndexTemplateService] [GKy7sPe] adding template [.monitoring-alerts] for index patterns [.monitoring-alerts-6]
[2019-02-23T04:19:04,944][INFO ][o.e.c.m.MetaDataIndexTemplateService] [GKy7sPe] adding template [.monitoring-beats] for index patterns [.monitoring-beats-6-*]
[2019-02-23T04:19:05,265][INFO ][o.e.c.m.MetaDataIndexTemplateService] [GKy7sPe] adding template [.monitoring-kibana] for index patterns [.monitoring-kibana-6-*]
[2019-02-23T04:19:06,992][INFO ][o.e.l.LicenseService ] [GKy7sPe] license [c7497c27-896c-441b-82c5-c33bc011f901] mode [basic] - valid
When I try localhost:9200 in my browser, it keeps waiting for a response, but Elasticsearch never answers.
Could someone share some inputs here?

Elevating @Val's comment to an answer, because I was facing the same issue and the problem is indeed that Docker binds to the external IP address and doesn't bind to localhost at all. After that comment I tried to reach my cluster from my laptop and I could, and indeed requesting 0.0.0.0:9200 instead of 127.0.0.1:9200 worked from the server.
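To see which addresses actually answer on port 9200, a quick probe can help. A minimal sketch; the candidate host list is an assumption (172.17.0.2 is the container IP printed in the Elasticsearch log above, yours may differ):

```python
import urllib.request
import urllib.error

def probe_es(hosts, port=9200, timeout=3):
    """Try an HTTP GET against each candidate address and report which respond."""
    results = {}
    for host in hosts:
        url = f"http://{host}:{port}/"
        try:
            with urllib.request.urlopen(url, timeout=timeout) as resp:
                results[host] = resp.status == 200
        except (urllib.error.URLError, OSError):
            # Connection refused, timeout, or unresolvable host.
            results[host] = False
    return results

# Candidates: loopback, the hostname, and the container IP from the log.
print(probe_es(["127.0.0.1", "localhost", "172.17.0.2"]))
```

Whichever address returns True is the one your clients should use; if only the container IP responds, the port mapping or the host's loopback routing is the thing to investigate.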

Related

Docker - image - jupyter pyspark

I've pulled the image "jupyter/pyspark-notebook:latest" from Docker Hub; it's running and I have front-end access to the notebook. But my code depends on some packages. I tried installing them from the Docker Desktop terminal as shown in the image below, and it reports that the package was installed, but when I run the code it says the package was not found. Another way was through the Spark session: it runs without errors but still reports that the package was not found.
Can you help me?
$ spark-shell --packages com.crealytics:spark-excel_2.12:3.2.2_0.18.0
:: loading settings :: url = jar:file:/usr/local/spark-3.3.1-bin-hadoop3/jars/ivy-2.5.0.jar!/org/apache/ivy/core/settings/ivysettings.xml
Ivy Default Cache set to: /home/jovyan/.ivy2/cache
The jars for the packages stored in: /home/jovyan/.ivy2/jars
com.crealytics#spark-excel_2.12 added as a dependency
:: resolving dependencies :: org.apache.spark#spark-submit-parent-13eb0764-e97a-46aa-93c3-386249b15f8b;1.0
confs: [default]
found com.crealytics#spark-excel_2.12;3.2.2_0.18.0 in central
found org.apache.poi#poi;5.2.2 in central
found commons-codec#commons-codec;1.15 in central
found org.apache.commons#commons-collections4;4.4 in central
found org.apache.commons#commons-math3;3.6.1 in central
found commons-io#commons-io;2.11.0 in central
found com.zaxxer#SparseBitSet;1.2 in central
found org.apache.logging.log4j#log4j-api;2.17.2 in central
found org.apache.poi#poi-ooxml;5.2.2 in central
found org.apache.poi#poi-ooxml-lite;5.2.2 in central
found org.apache.xmlbeans#xmlbeans;5.0.3 in central
found org.apache.commons#commons-compress;1.21 in central
found com.github.virtuald#curvesapi;1.07 in central
found com.norbitltd#spoiwo_2.12;2.2.1 in central
found com.github.tototoshi#scala-csv_2.12;1.3.10 in central
found com.github.pjfanning#excel-streaming-reader;4.0.1 in central
found com.github.pjfanning#poi-shared-strings;2.5.3 in central
found org.slf4j#slf4j-api;1.7.36 in central
found com.h2database#h2;2.1.212 in central
found org.apache.commons#commons-text;1.9 in central
found org.apache.commons#commons-lang3;3.11 in central
found org.scala-lang.modules#scala-collection-compat_2.12;2.8.1 in central
downloading https://repo1.maven.org/maven2/com/crealytics/spark-excel_2.12/3.2.2_0.18.0/spark-excel_2.12-3.2.2_0.18.0.jar ...
[SUCCESSFUL ] com.crealytics#spark-excel_2.12;3.2.2_0.18.0!spark-excel_2.12.jar (3870ms)
downloading https://repo1.maven.org/maven2/org/apache/poi/poi/5.2.2/poi-5.2.2.jar ...
[SUCCESSFUL ] org.apache.poi#poi;5.2.2!poi.jar (958ms)
downloading https://repo1.maven.org/maven2/org/apache/poi/poi-ooxml/5.2.2/poi-ooxml-5.2.2.jar ...
[SUCCESSFUL ] org.apache.poi#poi-ooxml;5.2.2!poi-ooxml.jar (804ms)
downloading https://repo1.maven.org/maven2/org/apache/poi/poi-ooxml-lite/5.2.2/poi-ooxml-lite-5.2.2.jar ...
[SUCCESSFUL ] org.apache.poi#poi-ooxml-lite;5.2.2!poi-ooxml-lite.jar (1300ms)
downloading https://repo1.maven.org/maven2/org/apache/xmlbeans/xmlbeans/5.0.3/xmlbeans-5.0.3.jar ...
[SUCCESSFUL ] org.apache.xmlbeans#xmlbeans;5.0.3!xmlbeans.jar (680ms)
downloading https://repo1.maven.org/maven2/com/norbitltd/spoiwo_2.12/2.2.1/spoiwo_2.12-2.2.1.jar ...
[SUCCESSFUL ] com.norbitltd#spoiwo_2.12;2.2.1!spoiwo_2.12.jar (408ms)
downloading https://repo1.maven.org/maven2/com/github/pjfanning/excel-streaming-reader/4.0.1/excel-streaming-reader-4.0.1.jar ...
[SUCCESSFUL ] com.github.pjfanning#excel-streaming-reader;4.0.1!excel-streaming-reader.jar (331ms)
downloading https://repo1.maven.org/maven2/com/github/pjfanning/poi-shared-strings/2.5.3/poi-shared-strings-2.5.3.jar ...
[SUCCESSFUL ] com.github.pjfanning#poi-shared-strings;2.5.3!poi-shared-strings.jar (317ms)
downloading https://repo1.maven.org/maven2/commons-io/commons-io/2.11.0/commons-io-2.11.0.jar ...
[SUCCESSFUL ] commons-io#commons-io;2.11.0!commons-io.jar (331ms)
downloading https://repo1.maven.org/maven2/org/apache/commons/commons-compress/1.21/commons-compress-1.21.jar ...
[SUCCESSFUL ] org.apache.commons#commons-compress;1.21!commons-compress.jar (478ms)
downloading https://repo1.maven.org/maven2/org/apache/logging/log4j/log4j-api/2.17.2/log4j-api-2.17.2.jar ...
[SUCCESSFUL ] org.apache.logging.log4j#log4j-api;2.17.2!log4j-api.jar (367ms)
downloading https://repo1.maven.org/maven2/com/zaxxer/SparseBitSet/1.2/SparseBitSet-1.2.jar ...
[SUCCESSFUL ] com.zaxxer#SparseBitSet;1.2!SparseBitSet.jar (297ms)
downloading https://repo1.maven.org/maven2/org/apache/commons/commons-collections4/4.4/commons-collections4-4.4.jar ...
[SUCCESSFUL ] org.apache.commons#commons-collections4;4.4!commons-collections4.jar (452ms)
downloading https://repo1.maven.org/maven2/com/github/virtuald/curvesapi/1.07/curvesapi-1.07.jar ...
[SUCCESSFUL ] com.github.virtuald#curvesapi;1.07!curvesapi.jar (335ms)
downloading https://repo1.maven.org/maven2/commons-codec/commons-codec/1.15/commons-codec-1.15.jar ...
[SUCCESSFUL ] commons-codec#commons-codec;1.15!commons-codec.jar (372ms)
downloading https://repo1.maven.org/maven2/org/apache/commons/commons-math3/3.6.1/commons-math3-3.6.1.jar ...
[SUCCESSFUL ] org.apache.commons#commons-math3;3.6.1!commons-math3.jar (744ms)
downloading https://repo1.maven.org/maven2/org/scala-lang/modules/scala-collection-compat_2.12/2.8.1/scala-collection-compat_2.12-2.8.1.jar ...
[SUCCESSFUL ] org.scala-lang.modules#scala-collection-compat_2.12;2.8.1!scala-collection-compat_2.12.jar (371ms)
downloading https://repo1.maven.org/maven2/com/github/tototoshi/scala-csv_2.12/1.3.10/scala-csv_2.12-1.3.10.jar ...
[SUCCESSFUL ] com.github.tototoshi#scala-csv_2.12;1.3.10!scala-csv_2.12.jar (295ms)
downloading https://repo1.maven.org/maven2/org/slf4j/slf4j-api/1.7.36/slf4j-api-1.7.36.jar ...
[SUCCESSFUL ] org.slf4j#slf4j-api;1.7.36!slf4j-api.jar (285ms)
downloading https://repo1.maven.org/maven2/com/h2database/h2/2.1.212/h2-2.1.212.jar ...
[SUCCESSFUL ] com.h2database#h2;2.1.212!h2.jar (853ms)
downloading https://repo1.maven.org/maven2/org/apache/commons/commons-text/1.9/commons-text-1.9.jar ...
[SUCCESSFUL ] org.apache.commons#commons-text;1.9!commons-text.jar (340ms)
downloading https://repo1.maven.org/maven2/org/apache/commons/commons-lang3/3.11/commons-lang3-3.11.jar ...
[SUCCESSFUL ] org.apache.commons#commons-lang3;3.11!commons-lang3.jar (414ms)
:: resolution report :: resolve 24141ms :: artifacts dl 14647ms
:: modules in use:
com.crealytics#spark-excel_2.12;3.2.2_0.18.0 from central in [default]
com.github.pjfanning#excel-streaming-reader;4.0.1 from central in [default]
com.github.pjfanning#poi-shared-strings;2.5.3 from central in [default]
com.github.tototoshi#scala-csv_2.12;1.3.10 from central in [default]
com.github.virtuald#curvesapi;1.07 from central in [default]
com.h2database#h2;2.1.212 from central in [default]
com.norbitltd#spoiwo_2.12;2.2.1 from central in [default]
com.zaxxer#SparseBitSet;1.2 from central in [default]
commons-codec#commons-codec;1.15 from central in [default]
commons-io#commons-io;2.11.0 from central in [default]
org.apache.commons#commons-collections4;4.4 from central in [default]
org.apache.commons#commons-compress;1.21 from central in [default]
org.apache.commons#commons-lang3;3.11 from central in [default]
org.apache.commons#commons-math3;3.6.1 from central in [default]
org.apache.commons#commons-text;1.9 from central in [default]
org.apache.logging.log4j#log4j-api;2.17.2 from central in [default]
org.apache.poi#poi;5.2.2 from central in [default]
org.apache.poi#poi-ooxml;5.2.2 from central in [default]
org.apache.poi#poi-ooxml-lite;5.2.2 from central in [default]
org.apache.xmlbeans#xmlbeans;5.0.3 from central in [default]
org.scala-lang.modules#scala-collection-compat_2.12;2.8.1 from central in [default]
org.slf4j#slf4j-api;1.7.36 from central in [default]
:: evicted modules:
org.apache.logging.log4j#log4j-api;2.17.1 by [org.apache.logging.log4j#log4j-api;2.17.2] in [default]
org.apache.poi#poi;5.2.1 by [org.apache.poi#poi;5.2.2] in [default]
org.apache.poi#poi-ooxml;5.2.1 by [org.apache.poi#poi-ooxml;5.2.2] in [default]
---------------------------------------------------------------------
| | modules || artifacts |
| conf | number| search|dwnlded|evicted|| number|dwnlded|
---------------------------------------------------------------------
| default | 25 | 22 | 22 | 3 || 22 | 22 |
---------------------------------------------------------------------
:: retrieving :: org.apache.spark#spark-submit-parent-13eb0764-e97a-46aa-93c3-386249b15f8b
confs: [default]
22 artifacts copied, 0 already retrieved (40428kB/101ms)
23/01/26 20:28:56 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Setting default log level to "WARN".
To adjust logging level use sc.setLogLevel(newLevel). For SparkR, use setLogLevel(newLevel).
23/01/26 20:29:05 WARN Utils: Service 'SparkUI' could not bind on port 4040. Attempting port 4041.
Spark context Web UI available at http://104c93401bff:4041
Spark context available as 'sc' (master = local[*], app id = local-1674764945256).
Spark session available as 'spark'.
Welcome to
____ __
/ __/__ ___ _____/ /__
_\ \/ _ \/ _ `/ __/ '_/
/___/ .__/\_,_/_/ /_/\_\ version 3.3.1
/_/
Using Scala version 2.12.15 (OpenJDK 64-Bit Server VM, Java 17.0.5)
Type in expressions to have them evaluated.
Type :help for more information.
scala>
Py4JJavaError: An error occurred while calling o37.load.
: java.lang.ClassNotFoundException:
Failed to find data source: com.crealytics.spark. Please find packages at
https://spark.apache.org/third-party-projects.html
at org.apache.spark.sql.errors.QueryExecutionErrors$.failedToFindDataSourceError(QueryExecutionErrors.scala:587)
at org.apache.spark.sql.execution.datasources.DataSource$.lookupDataSource(DataSource.scala:675)
at org.apache.spark.sql.execution.datasources.DataSource$.lookupDataSourceV2(DataSource.scala:725)
at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:207)
at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:185)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:77)
at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:568)
at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
at py4j.Gateway.invoke(Gateway.java:282)
at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
at py4j.commands.CallCommand.execute(CallCommand.java:79)
at py4j.ClientServerConnection.waitForCommands(ClientServerConnection.java:182)
at py4j.ClientServerConnection.run(ClientServerConnection.java:106)
at java.base/java.lang.Thread.run(Thread.java:833)
Caused by: java.lang.ClassNotFoundException: com.crealytics.spark.DefaultSource
at java.base/java.net.URLClassLoader.findClass(URLClassLoader.java:445)
at java.base/java.lang.ClassLoader.loadClass(ClassLoader.java:587)
at java.base/java.lang.ClassLoader.loadClass(ClassLoader.java:520)
at org.apache.spark.sql.execution.datasources.DataSource$.$anonfun$lookupDataSource$5(DataSource.scala:661)
at scala.util.Try$.apply(Try.scala:213)
at org.apache.spark.sql.execution.datasources.DataSource$.$anonfun$lookupDataSource$4(DataSource.scala:661)
at scala.util.Failure.orElse(Try.scala:224)
at org.apache.spark.sql.execution.datasources.DataSource$.lookupDataSource(DataSource.scala:661)
... 15 more
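The `ClassNotFoundException: com.crealytics.spark.DefaultSource` at the bottom suggests the read call used the truncated format name `com.crealytics.spark`; the data source shipped in the spark-excel package is registered as `com.crealytics.spark.excel`. A sketch of the corrected read, assuming that is the cause; the `header` option and the path are placeholders:

```python
def read_excel(spark, path):
    # spark-excel registers its data source as "com.crealytics.spark.excel";
    # the truncated name "com.crealytics.spark" seen in the traceback cannot
    # be resolved, hence the ClassNotFoundException.
    return (spark.read.format("com.crealytics.spark.excel")
                 .option("header", "true")   # placeholder option
                 .load(path))
```

Also note that `--packages` passed to `spark-shell` only affects that shell's own JVM; a notebook kernel needs the package on its own session, e.g. by setting `spark.jars.packages` on the `SparkSession` builder before the session is created.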

How to increase the disk space of Elasticsearch

I used the command below to check the available disk space on my node.
Input (Dev console):
GET /_cat/nodes?v&h=id,diskTotal,diskUsed,diskAvail,diskUsedPercent
Output:
id diskTotal diskUsed diskAvail diskUsedPercent
vcgA 9.6gb 8.6gb 960.4mb 90.26
I have a single node and it shows 960.4mb of available space. Is it possible to increase that to 2gb, and can anyone tell me how to achieve this?
Also, I don't have any index created in the cluster, so I'm not sure how 8.6gb of space is already occupied.
I also added the config properties below to the elasticsearch.yml file:
cluster.routing.allocation.disk.threshold_enabled: true
cluster.routing.allocation.disk.watermark.flood_stage: 20gb
cluster.routing.allocation.disk.watermark.low: 30gb
cluster.routing.allocation.disk.watermark.high: 25gb
Elasticsearch log:
sudo docker logs c8eadd9d92f6
OpenJDK 64-Bit Server VM warning: Option UseConcMarkSweepGC was deprecated in version 9.0 and will likely be removed in a future release.
[2019-12-08T18:16:10,968][INFO ][o.e.e.NodeEnvironment ] [EWlGsNg] using [1] data paths, mounts [[/ (overlay)]], net usable_space [959.1mb], net total_space [9.6gb], types [overlay]
[2019-12-08T18:16:10,972][INFO ][o.e.e.NodeEnvironment ] [EWlGsNg] heap size [9.8gb], compressed ordinary object pointers [true]
[2019-12-08T18:16:10,975][INFO ][o.e.n.Node ] [EWlGsNg] node name derived from node ID [EWlGsNg9R4ChiOSACmSE5Q]; set [node.name] to override
[2019-12-08T18:16:10,975][INFO ][o.e.n.Node ] [EWlGsNg] version[6.6.0], pid[1], build[oss/tar/a9861f4/2019-01-24T11:27:09.439740Z], OS[Linux/4.15.0-1044-gcp/amd64], JVM[Oracle Corporation/OpenJDK 64-Bit Server VM/11.0.1/11.0.1+13]
[2019-12-08T18:16:10,975][INFO ][o.e.n.Node ] [EWlGsNg] JVM arguments [-Xms1g, -Xmx1g, -XX:+UseConcMarkSweepGC, -XX:CMSInitiatingOccupancyFraction=75, -XX:+UseCMSInitiatingOccupancyOnly, -Des.networkaddress.cache.ttl=60, -Des.networkaddress.cache.negative.ttl=10, -XX:+AlwaysPreTouch, -Xss1m, -Djava.awt.headless=true, -Dfile.encoding=UTF-8, -Djna.nosys=true, -XX:-OmitStackTraceInFastThrow, -Dio.netty.noUnsafe=true, -Dio.netty.noKeySetOptimization=true, -Dio.netty.recycler.maxCapacityPerThread=0, -Dlog4j.shutdownHookEnabled=false, -Dlog4j2.disable.jmx=true, -Djava.io.tmpdir=/tmp/elasticsearch-14249827840908502393, -XX:+HeapDumpOnOutOfMemoryError, -XX:HeapDumpPath=data, -XX:ErrorFile=logs/hs_err_pid%p.log, -Xlog:gc*,gc+age=trace,safepoint:file=logs/gc.log:utctime,pid,tags:filecount=32,filesize=64m, -Djava.locale.providers=COMPAT, -XX:UseAVX=2, -Des.cgroups.hierarchy.override=/, -Xmx10g, -Xms10g, -Des.path.home=/usr/share/elasticsearch, -Des.path.conf=/usr/share/elasticsearch/config, -Des.distribution.flavor=oss, -Des.distribution.type=tar]
[2019-12-08T18:16:11,844][INFO ][o.e.p.PluginsService ] [EWlGsNg] loaded module [aggs-matrix-stats]
[2019-12-08T18:16:11,845][INFO ][o.e.p.PluginsService ] [EWlGsNg] loaded module [analysis-common]
[2019-12-08T18:16:11,845][INFO ][o.e.p.PluginsService ] [EWlGsNg] loaded module [ingest-common]
[2019-12-08T18:16:11,845][INFO ][o.e.p.PluginsService ] [EWlGsNg] loaded module [lang-expression]
[2019-12-08T18:16:11,845][INFO ][o.e.p.PluginsService ] [EWlGsNg] loaded module [lang-mustache]
[2019-12-08T18:16:11,845][INFO ][o.e.p.PluginsService ] [EWlGsNg] loaded module [lang-painless]
[2019-12-08T18:16:11,845][INFO ][o.e.p.PluginsService ] [EWlGsNg] loaded module [mapper-extras]
[2019-12-08T18:16:11,845][INFO ][o.e.p.PluginsService ] [EWlGsNg] loaded module [parent-join]
[2019-12-08T18:16:11,846][INFO ][o.e.p.PluginsService ] [EWlGsNg] loaded module [percolator]
[2019-12-08T18:16:11,846][INFO ][o.e.p.PluginsService ] [EWlGsNg] loaded module [rank-eval]
[2019-12-08T18:16:11,846][INFO ][o.e.p.PluginsService ] [EWlGsNg] loaded module [reindex]
[2019-12-08T18:16:11,846][INFO ][o.e.p.PluginsService ] [EWlGsNg] loaded module [repository-url]
[2019-12-08T18:16:11,846][INFO ][o.e.p.PluginsService ] [EWlGsNg] loaded module [transport-netty4]
[2019-12-08T18:16:11,846][INFO ][o.e.p.PluginsService ] [EWlGsNg] loaded module [tribe]
[2019-12-08T18:16:11,847][INFO ][o.e.p.PluginsService ] [EWlGsNg] loaded plugin [ingest-geoip]
[2019-12-08T18:16:11,847][INFO ][o.e.p.PluginsService ] [EWlGsNg] loaded plugin [ingest-user-agent]
[2019-12-08T18:16:15,363][INFO ][o.e.d.DiscoveryModule ] [EWlGsNg] using discovery type [single-node] and host providers [settings]
[2019-12-08T18:16:15,919][INFO ][o.e.n.Node ] [EWlGsNg] initialized
[2019-12-08T18:16:15,919][INFO ][o.e.n.Node ] [EWlGsNg] starting ...
[2019-12-08T18:16:16,619][INFO ][o.e.t.TransportService ] [EWlGsNg] publish_address {192.20.9.2:9300}, bound_addresses {0.0.0.0:9300}
[2019-12-08T18:16:16,747][INFO ][o.e.h.n.Netty4HttpServerTransport] [EWlGsNg] publish_address {192.20.9.2:9200}, bound_addresses {0.0.0.0:9200}
[2019-12-08T18:16:16,747][INFO ][o.e.n.Node ] [EWlGsNg] started
[2019-12-08T18:16:16,781][INFO ][o.e.g.GatewayService ] [EWlGsNg] recovered [0] indices into cluster_state
[2019-12-08T18:16:16,931][INFO ][o.e.m.j.JvmGcMonitorService] [EWlGsNg] [gc][1] overhead, spent [447ms] collecting in the last [1s]
[2019-12-08T18:16:17,184][INFO ][o.e.c.m.MetaDataCreateIndexService] [EWlGsNg] [.kibana_1] creating index, cause [api], templates [], shards [1]/[1], mappings [doc]
[2019-12-08T18:16:17,195][INFO ][o.e.c.r.a.AllocationService] [EWlGsNg] updating number_of_replicas to [0] for indices [.kibana_1]
[2019-12-08T18:16:17,468][INFO ][o.e.c.r.a.AllocationService] [EWlGsNg] Cluster health status changed from [YELLOW] to [GREEN] (reason: [shards started [[.kibana_1][0]] ...]).
Note: I'm using ELK version 6.6.0.
Can someone please help me with this? I'm stuck and not sure how to achieve it.
If you run Elasticsearch via Docker Desktop, you can increase the total disk space via Docker Desktop -> Settings -> Resources -> Virtual disk limit. Doing this should automatically increase disk.total.
If this isn't achievable, I suggest you read the official Elasticsearch docs on increasing the disk capacity of data nodes for further information.
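A note on the watermark settings in the question: when given absolute values (e.g. `30gb`), they are thresholds on *free* space, so `low` should be the largest value and `flood_stage` the smallest, as configured there. A small sketch that checks the `_cat/nodes` numbers against such thresholds (sizes are parsed the way the `_cat` APIs print them):

```python
UNITS = {"b": 1, "kb": 1024, "mb": 1024**2, "gb": 1024**3, "tb": 1024**4}

def parse_size(s):
    """Parse sizes as printed by the _cat APIs, e.g. '960.4mb' or '9.6gb'."""
    for suffix in ("tb", "gb", "mb", "kb", "b"):
        if s.endswith(suffix):
            return float(s[: -len(suffix)]) * UNITS[suffix]
    raise ValueError(f"unrecognised size: {s}")

def watermark_status(disk_avail, low="30gb", high="25gb", flood_stage="20gb"):
    """Report which absolute (free-space) watermarks the node has crossed."""
    free = parse_size(disk_avail)
    return {name: free < parse_size(threshold)
            for name, threshold in
            (("low", low), ("high", high), ("flood_stage", flood_stage))}

# Numbers from the _cat/nodes output in the question:
print(watermark_status("960.4mb"))
# → {'low': True, 'high': True, 'flood_stage': True}
```

With only 960.4mb free against a 20gb flood-stage threshold, the node sits past every watermark, which is why growing the underlying disk (or loosening the thresholds) is the first step.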

Elasticsearch in Docker dies silently and restarts, but why?

I am monitoring a Docker container that runs Elasticsearch 6.2.3.
Every day it dies, and I have not been able to figure out why...
I think it is memory, but where can I find proof of that?
Running top on the (Linux) host right now I get:
Virtual Mem = 53 GB
Res = 26.7 GB
docker inspect gives me this:
"Memory": 34225520640,
"CpusetMems": "",
"KernelMemory": 0,
"MemoryReservation": 0,
"MemorySwap": 34225520640,
"MemorySwappiness": null,
"Name": "memlock",
The JVM params are
/usr/bin/java
-Xms1g
-Xmx1g
-XX:+UseConcMarkSweepGC
-XX:CMSInitiatingOccupancyFraction=75
-XX:+UseCMSInitiatingOccupancyOnly
-XX:+AlwaysPreTouch
-Xss1m
-Djava.awt.headless=true
-Dfile.encoding=UTF-8
-Djna.nosys=true
-XX:-OmitStackTraceInFastThrow
-Dio.netty.noUnsafe=true
-Dio.netty.noKeySetOptimization=true
-Dio.netty.recycler.maxCapacityPerThread=0
-Dlog4j.shutdownHookEnabled=false
-Dlog4j2.disable.jmx=true
-Djava.io.tmpdir=/usr/share/elasticsearch/tmp
-XX:+HeapDumpOnOutOfMemoryError
-XX:+PrintGCDetails
-XX:+PrintGCDateStamps
-XX:+PrintTenuringDistribution
-XX:+PrintGCApplicationStoppedTime
-Xloggc:logs/gc.log
-XX:+UseGCLogFileRotation
-XX:NumberOfGCLogFiles=32
-XX:GCLogFileSize=64m
-Des.cgroups.hierarchy.override=/
-Xms26112m
-Xmx26112m
-Des.path.home=/usr/share/elasticsearch
-Des.path.conf=/usr/share/elasticsearch/config
-cp
/usr/share/elasticsearch/lib/*
org.elasticsearch.bootstrap.Elasticsearch
and here is the log
[2019-01-30T07:14:01,278] [app-mesos-orders_api-2019.01.30/6l7Ga1I5T3qhLKYmWjQpRA] update_mapping [doc]
[2019-01-30T07:25:53,489] initializing ...
[2019-01-30T07:25:53,581] using [1] data paths, mounts [[/usr/share/elasticsearch/data (/dev/mapper/vg_tobias-lv_tobias)]], net usable_space [126.8gb], net total_space [199.8gb], types [xfs]
[2019-01-30T07:25:53,581] heap size [25.4gb], compressed ordinary object pointers [true]
[2019-01-30T07:26:12,390], node ID [-sJqW_h1TKy9c_Ka08In0A]
[2019-01-30T07:26:12,391] version[6.2.3], pid[1], build[c59ff00/2018-03-13T10:06:29.741383Z], OS[Linux/3.10.0-862.11.6.el7.x86_64/amd64], JVM[Oracle Corporation/OpenJDK 64-Bit Server VM/1.8.0_151/25.151-b12]
[2019-01-30T07:26:12,391] JVM arguments [-Xms1g, -Xmx1g, -XX:+UseConcMarkSweepGC, -XX:CMSInitiatingOccupancyFraction=75, -XX:+UseCMSInitiatingOccupancyOnly, -XX:+AlwaysPreTouch, -Xss1m, -Djava.awt.headless=true, -Dfile.encoding=UTF-8, -Djna.nosys=true, -XX:-OmitStackTraceInFastThrow, -Dio.netty.noUnsafe=true, -Dio.netty.noKeySetOptimization=true, -Dio.netty.recycler.maxCapacityPerThread=0, -Dlog4j.shutdownHookEnabled=false, -Dlog4j2.disable.jmx=true, -Djava.io.tmpdir=/usr/share/elasticsearch/tmp, -XX:+HeapDumpOnOutOfMemoryError, -XX:+PrintGCDetails, -XX:+PrintGCDateStamps, -XX:+PrintTenuringDistribution, -XX:+PrintGCApplicationStoppedTime, -Xloggc:logs/gc.log, -XX:+UseGCLogFileRotation, -XX:NumberOfGCLogFiles=32, -XX:GCLogFileSize=64m, -Des.cgroups.hierarchy.override=/, -Xms26112m, -Xmx26112m, -Des.path.home=/usr/share/elasticsearch, -Des.path.conf=/usr/share/elasticsearch/config]
[2019-01-30T07:26:13,008] loaded module [aggs-matrix-stats]
[2019-01-30T07:26:13,008] loaded module [analysis-common]
[2019-01-30T07:26:13,008] loaded module [ingest-common]
[2019-01-30T07:26:13,008] loaded module [lang-expression]
[2019-01-30T07:26:13,008] loaded module [lang-mustache]
[2019-01-30T07:26:13,009] loaded module [lang-painless]
[2019-01-30T07:26:13,009] loaded module [mapper-extras]
[2019-01-30T07:26:13,009] loaded module [parent-join]
[2019-01-30T07:26:13,009] loaded module [percolator]
[2019-01-30T07:26:13,009] loaded module [rank-eval]
[2019-01-30T07:26:13,009] loaded module [reindex]
[2019-01-30T07:26:13,009] loaded module [repository-url]
[2019-01-30T07:26:13,009] loaded module [transport-netty4]
[2019-01-30T07:26:13,009] loaded module [tribe]
[2019-01-30T07:26:13,009] no plugins loaded
[2019-01-30T07:26:19,947] using discovery type [zen]
[2019-01-30T07:26:20,444] initialized
[2019-01-30T07:26:20,444] starting ...
[2019-01-30T07:26:20,600] publish_address {172.16.44.8:9300}, bound_addresses {172.17.0.14:9300}
[2019-01-30T07:26:21,507] bound or publishing to a non-loopback address, enforcing bootstrap checks
[2019-01-30T07:26:24,855] zen-disco-elected-as-master ([0] nodes joined), reason: new_master {elasticsearch-1}{-sJqW_h1TKy9c_Ka08In0A}{iGTgehBjQ3yPRm9nlTLbYw}{172.16.44.8}{172.16.44.8:9300}{rack=rack1}
[2019-01-30T07:26:24,861] new_master {elasticsearch-1}{-sJqW_h1TKy9c_Ka08In0A}{iGTgehBjQ3yPRm9nlTLbYw}{172.16.44.8}{172.16.44.8:9300}{rack=rack1}, reason: apply cluster state (from master [master {elasticsearch-1}{-sJqW_h1TKy9c_Ka08In0A}{iGTgehBjQ3yPRm9nlTLbYw}{172.16.44.8}{172.16.44.8:9300}{rack=rack1} committed version [1] source [zen-disco-elected-as-master ([0] nodes joined)]])
[2019-01-30T07:26:24,881] publish_address {172.16.44.8:9200}, bound_addresses {172.17.0.14:9200}
[2019-01-30T07:26:24,881] started
[2019-01-30T07:26:37,033] recovered [1535] indices into cluster_state
I am thinking that the Docker container runs out of memory and dies silently.
How can I prove this, and what can I do to solve it?
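One way to get proof: when the kernel OOM-kills the container's main process, Docker records it, and `docker inspect <container>` reports `"OOMKilled": true` under `State` (the host's `dmesg` should also contain `oom-kill` lines). A sketch that checks that flag from the `docker inspect` JSON, fed a canned sample here instead of live output:

```python
import json

def was_oom_killed(inspect_json):
    """Given `docker inspect <container>` output (a JSON array), return the
    State.OOMKilled flag of the first container."""
    state = json.loads(inspect_json)[0]["State"]
    return bool(state.get("OOMKilled", False))

# Canned sample standing in for real `docker inspect` output; exit code 137
# (128 + SIGKILL) is the usual companion of an OOM kill.
sample = '[{"State": {"Status": "running", "OOMKilled": true, "ExitCode": 137}}]'
print(was_oom_killed(sample))  # → True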

Solr initialization failure in production environment with Ruby on Rails

I am using Solr v5.3.1 with Rails 4.2.2 and the sunspot-solr gem on Ubuntu 15.04.
I'm getting a Solr initialization failure:
SolrCore Initialization Failures
production: org.apache.solr.common.SolrException:org.apache.solr.common.SolrException: Could not load conf for core production: Error loading solr config from /var/solr/data/production/conf/solrconfig.xml
Please check your logs for more information
The logs are as follows (solr.log):
2015-12-09 14:29:28.634 ERROR (qtp1450821318-17) [ ] o.a.s.c.SolrCore org.apache.solr.common.SolrException: Error CREATEing SolrCore 'production': Unable to create core [production] Caused by: Can't find resource 'solrconfig.xml' in classpath or '/var/solr/data/production/conf'
at org.apache.solr.handler.admin.CoreAdminHandler.handleCreateAction(CoreAdminHandler.java:662)
at org.apache.solr.handler.admin.CoreAdminHandler.handleRequestInternal(CoreAdminHandler.java:214)
at org.apache.solr.handler.admin.CoreAdminHandler.handleRequestBody(CoreAdminHandler.java:194)
at org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:143)
at org.apache.solr.servlet.HttpSolrCall.handleAdminRequest(HttpSolrCall.java:675)
at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:443)
at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:214)
at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:179)
at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1652)
at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:585)
at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
at org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:577)
at org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:223)
at org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1127)
at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:515)
at org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
at org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1061)
at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
at org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:215)
at org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:110)
at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:97)
at org.eclipse.jetty.server.Server.handle(Server.java:499)
at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:310)
at org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:257)
at org.eclipse.jetty.io.AbstractConnection$2.run(AbstractConnection.java:540)
at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:635)
at org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:555)
at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.solr.common.SolrException: Unable to create core [production]
at org.apache.solr.core.CoreContainer.create(CoreContainer.java:737)
at org.apache.solr.core.CoreContainer.create(CoreContainer.java:697)
at org.apache.solr.handler.admin.CoreAdminHandler.handleCreateAction(CoreAdminHandler.java:629)
... 27 more
Caused by: org.apache.solr.common.SolrException: Could not load conf for core production: Error loading solr config from /var/solr/data/production/conf/solrconfig.xml
at org.apache.solr.core.ConfigSetService.getConfig(ConfigSetService.java:80)
at org.apache.solr.core.CoreContainer.create(CoreContainer.java:721)
... 29 more
Caused by: org.apache.solr.common.SolrException: Error loading solr config from /var/solr/data/production/conf/solrconfig.xml
at org.apache.solr.core.SolrConfig.readFromResourceLoader(SolrConfig.java:186)
at org.apache.solr.core.ConfigSetService.createSolrConfig(ConfigSetService.java:94)
at org.apache.solr.core.ConfigSetService.getConfig(ConfigSetService.java:74)
... 30 more
Caused by: org.apache.solr.core.SolrResourceNotFoundException: Can't find resource 'solrconfig.xml' in classpath or '/var/solr/data/production/conf'
at org.apache.solr.core.SolrResourceLoader.openResource(SolrResourceLoader.java:363)
at org.apache.solr.core.SolrResourceLoader.openConfig(SolrResourceLoader.java:309)
at org.apache.solr.core.Config.<init>(Config.java:122)
at org.apache.solr.core.Config.<init>(Config.java:92)
at org.apache.solr.core.SolrConfig.<init>(SolrConfig.java:201)
at org.apache.solr.core.SolrConfig.readFromResourceLoader(SolrConfig.java:178)
... 32 more
2015-12-09 14:29:28.635 INFO (qtp1450821318-17) [ ] o.a.s.s.SolrDispatchFilter [admin] webapp=null path=/admin/cores params={schema=schema.xml&dataDir=data&name=production&indexInfo=false&action=CREATE&collection=&shard=&wt=json&instanceDir=production&config=solrconfig.xml&_=1449671367929} status=400 QTime=674
2015-12-09 14:29:39.624 INFO (qtp1450821318-19) [ ] o.a.s.s.SolrDispatchFilter [admin] webapp=null path=/admin/info/properties params={wt=json&_=1449671379587} status=0 QTime=8
2015-12-09 14:29:41.793 INFO (qtp1450821318-18) [ ] o.a.s.s.SolrDispatchFilter [admin] webapp=null path=/admin/info/logging params={wt=json&since=0&_=1449671381576} status=0 QTime=188
2015-12-09 14:29:52.103 INFO (qtp1450821318-14) [ ] o.a.s.s.SolrDispatchFilter [admin] webapp=null path=/admin/info/logging params={wt=json&since=1449671368634&_=1449671392070} status=0 QTime=0
2015-12-09 14:30:02.145 INFO (qtp1450821318-17) [ ] o.a.s.s.SolrDispatchFilter [admin] webapp=null path=/admin/info/logging params={wt=json&since=1449671368634&_=1449671402114} status=0 QTime=0
2015-12-09 14:30:05.817 INFO (qtp1450821318-15) [ ] o.a.s.s.SolrDispatchFilter [admin] webapp=null path=/admin/cores params={wt=json&_=1449671405761} status=0 QTime=24
2015-12-09 14:40:20.780 INFO (qtp1450821318-17) [ ] o.a.s.s.SolrDispatchFilter [admin] webapp=null path=/admin/cores params={indexInfo=false&wt=json&_=1449672020748} status=0 QTime=1
2015-12-09 14:40:20.839 INFO (qtp1450821318-12) [ ] o.a.s.s.SolrDispatchFilter [admin] webapp=null path=/admin/info/system params={wt=json&_=1449672020806} status=0 QTime=5
The error suggests it cannot find solrconfig.xml at /var/solr/data/production/conf/solrconfig.xml, and indeed, that file does not exist.
In fact, /var/solr/data exists, but /var/solr/data/production does not! /var/solr/data contains only one file, solr.xml.
Can somebody please help me understand which piece of the Solr setup I have missed, and point me to a guide to help me finalize the configuration?
The problem was that I had simply installed and started Solr without actually creating a core called 'production'.
I fixed this by executing
/opt/solr/bin/solr create -c production
as the solr user, and it now works exactly as I expected!
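After creating the core, a quick sanity check (assuming Solr is listening on its default port 8983) confirms that the core loaded and that the previously missing config file now exists:

```shell
# Ask the CoreAdmin API for the status of the new core;
# 'production' should appear in the JSON response.
curl 'http://localhost:8983/solr/admin/cores?action=STATUS&core=production&wt=json'

# 'solr create' copies a default configset into the core's conf directory,
# so the file the original error complained about should now be present.
ls /var/solr/data/production/conf/solrconfig.xml
```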

Solr (Sunspot) runs, but won't start

When I run RAILS_ENV=production rake sunspot:solr:run, Solr starts as expected and the log looks something like this:
0 INFO (main) [ ] o.e.j.u.log Logging initialized #613ms
355 INFO (main) [ ] o.e.j.s.Server jetty-9.2.11.v20150529
380 WARN (main) [ ] o.e.j.s.h.RequestLogHandler !RequestLog
383 INFO (main) [ ] o.e.j.d.p.ScanningAppProvider Deployment monitor [file:/usr/local/rvm/gems/ruby-2.2.1/gems/sunspot_solr-2.2.3/solr/server/contexts/] at interval 0
1392 INFO (main) [ ] o.e.j.w.StandardDescriptorProcessor NO JSP Support for /solr, did not find org.apache.jasper.servlet.JspServlet
1437 WARN (main) [ ] o.e.j.s.SecurityHandler ServletContext#o.e.j.w.WebAppContext#1f3ff9d4{/solr,file:/usr/local/rvm/gems/ruby-2.2.1/gems/sunspot_solr-2.2.3/solr/server/solr-webapp/webapp/,STARTING}{/usr/local/rvm/gems/ruby-2.2.1/gems/sunspot_solr-2.2.3/solr/server/solr-webapp/webapp} has uncovered http methods for path: /
1456 INFO (main) [ ] o.a.s.s.SolrDispatchFilter SolrDispatchFilter.init(): WebAppClassLoader=879601585#346da7b1
1487 INFO (main) [ ] o.a.s.c.SolrResourceLoader JNDI not configured for solr (NoInitialContextEx)
1488 INFO (main) [ ] o.a.s.c.SolrResourceLoader using system property solr.solr.home: /usr/local/rvm/gems/ruby-2.2.1/gems/sunspot_solr-2.2.3/solr/server/solr
1491 INFO (main) [ ] o.a.s.c.SolrResourceLoader new SolrResourceLoader for directory: '/usr/local/rvm/gems/ruby-2.2.1/gems/sunspot_solr-2.2.3/solr/server/solr/'
1738 INFO (main) [ ] o.a.s.c.SolrXmlConfig Loading container configuration from /usr/local/rvm/gems/ruby-2.2.1/gems/sunspot_solr-2.2.3/solr/server/solr/solr.xml
1848 INFO (main) [ ] o.a.s.c.CoresLocator Config-defined core root directory: /usr/local/rvm/gems/ruby-2.2.1/gems/sunspot_solr-2.2.3/solr/server/solr
1882 INFO (main) [ ] o.a.s.c.CoreContainer New CoreContainer 394200281
1882 INFO (main) [ ] o.a.s.c.CoreContainer Loading cores into CoreContainer [instanceDir=/usr/local/rvm/gems/ruby-2.2.1/gems/sunspot_solr-2.2.3/solr/server/solr/]
1883 INFO (main) [ ] o.a.s.c.CoreContainer loading shared library: /usr/local/rvm/gems/ruby-2.2.1/gems/sunspot_solr-2.2.3/solr/server/solr/lib
1883 WARN (main) [ ] o.a.s.c.SolrResourceLoader Can't find (or read) directory to add to classloader: lib (resolved as: /usr/local/rvm/gems/ruby-2.2.1/gems/sunspot_solr-2.2.3/solr/server/solr/lib).
1905 INFO (main) [ ] o.a.s.h.c.HttpShardHandlerFactory created with socketTimeout : 600000,connTimeout : 60000,maxConnectionsPerHost : 20,maxConnections : 10000,corePoolSize : 0,maximumPoolSize : 2147483647,maxThreadIdleTime : 5,sizeOfQueue : -1,fairnessPolicy : false,useRetries : false,
2333 INFO (main) [ ] o.a.s.u.UpdateShardHandler Creating UpdateShardHandler HTTP client with params: socketTimeout=600000&connTimeout=60000&retry=true
2338 INFO (main) [ ] o.a.s.l.LogWatcher SLF4J impl is org.slf4j.impl.Log4jLoggerFactory
2339 INFO (main) [ ] o.a.s.l.LogWatcher Registering Log Listener [Log4j (org.slf4j.impl.Log4jLoggerFactory)]
2341 INFO (main) [ ] o.a.s.c.CoreContainer Security conf doesn't exist. Skipping setup for authorization module.
2341 INFO (main) [ ] o.a.s.c.CoreContainer No authentication plugin used.
2379 INFO (main) [ ] o.a.s.c.CoresLocator Looking for core definitions underneath /usr/local/rvm/gems/ruby-2.2.1/gems/sunspot_solr-2.2.3/solr/server/solr
2385 INFO (main) [ ] o.a.s.c.CoresLocator Found 0 core definitions
2389 INFO (main) [ ] o.a.s.s.SolrDispatchFilter user.dir=/usr/local/rvm/gems/ruby-2.2.1/gems/sunspot_solr-2.2.3/solr/server
2390 INFO (main) [ ] o.a.s.s.SolrDispatchFilter SolrDispatchFilter.init() done
2435 INFO (main) [ ] o.e.j.s.h.ContextHandler Started o.e.j.w.WebAppContext#1f3ff9d4{/solr,file:/usr/local/rvm/gems/ruby-2.2.1/gems/sunspot_solr-2.2.3/solr/server/solr-webapp/webapp/,AVAILABLE}{/usr/local/rvm/gems/ruby-2.2.1/gems/sunspot_solr-2.2.3/solr/server/solr-webapp/webapp}
2458 INFO (main) [ ] o.e.j.s.ServerConnector Started ServerConnector#649ad901{HTTP/1.1}{0.0.0.0:8080}
2458 INFO (main) [ ] o.e.j.s.Server Started #3074ms
6677 INFO (qtp207710485-20) [ ] o.a.s.s.SolrDispatchFilter [admin] webapp=null path=/admin/cores params={indexInfo=false&_=1453515549920&wt=json} status=0 QTime=58
6796 INFO (qtp207710485-19) [ ] o.a.s.s.SolrDispatchFilter [admin] webapp=null path=/admin/info/system params={_=1453515550069&wt=json} status=0 QTime=23
I can also access localhost:8080/solr.
However, when I run RAILS_ENV=production rake sunspot:solr:start --trace, I get a normal trace:
** Invoke sunspot:solr:start (first_time)
** Invoke environment (first_time)
** Execute environment
** Execute sunspot:solr:start
Removing stale PID file at /home/rails/webapp/solr/pids/production/sunspot-solr-production.pid
Successfully started Solr ...
Yet I can't access localhost:8080 (it gives me an ERR_CONNECTION_REFUSED), and when I try to do anything else involving Solr, I get an Errno::ECONNREFUSED: Connection refused error.
For example, when I run RAILS_ENV=production rake sunspot:solr:reindex after starting Solr, I get:
Errno::ECONNREFUSED: Connection refused - {:data=>"<?xml version=\"1.0\" encoding=\"UTF-8\"?><delete><query>type:Piece</query></delete>", :headers=>{"Content-Type"=>"text/xml"}, :method=>:post, :params=>{:wt=>:ruby}, :query=>"wt=ruby", :path=>"update", :uri=>#<URI::HTTP http://localhost:8080/solr/update?wt=ruby>, :open_timeout=>nil, :read_timeout=>nil, :retry_503=>nil, :retry_after_limit=>nil}
....
Errno::ECONNREFUSED: Connection refused - connect(2) for "localhost" port 8080
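When `sunspot:solr:start` reports success but every connection is refused, it is worth verifying independently whether anything is actually listening on the port. These are generic diagnostics, not Sunspot-specific:

```shell
# Is any process bound to port 8080?
sudo lsof -i :8080
# or, equivalently:
sudo netstat -tlnp | grep 8080

# If something is listening, does Solr answer at the configured path?
curl -v 'http://localhost:8080/solr/admin/cores?wt=json'

# The daemonized start also writes a log; check it for startup errors.
cat solr/pids/production/sunspot-solr-production.pid  # is this PID alive?
```

If nothing is listening, the daemonized Solr process died right after forking, which is consistent with `sunspot:solr:run` (foreground) working while `sunspot:solr:start` (background) silently fails.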
My sunspot.yml file looks like this:
production:
  solr:
    hostname: localhost
    port: 8080 # Tomcat defaults to port 8080
    path: /solr
    log_level: WARNING
    solr_home: solr
The Solr server was working fine before. The connection errors started when I tried to seed the production db and received an EOFError: end of file reached. I can post the full trace for that error if needed.
Please help!
I had a similar situation: Sunspot would say that Solr had started successfully, but it never actually did. It turned out I was using Java 1.6, and installing JDK 8 fixed it for me.
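If you suspect the same cause, checking which Java the rake task will actually launch is a quick first step (Solr 5.x requires at least Java 7, and Java 1.6 will fail at startup):

```shell
# The JVM on the PATH is what sunspot:solr:start will use.
java -version

# On Debian/Ubuntu, switch the system default if several JDKs are installed.
sudo update-alternatives --config java
```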

Resources