I've pulled the image "jupyter/pyspark-notebook:latest" from Docker Hub; it's running and I have front-end access to write code. But my code depends on some packages. I tried to install them from the Docker Desktop terminal as shown below, and it reports that the package was installed, yet when I run my code it says the package was not found. I also tried installing through the Spark session: it runs without errors, but the code still reports that the package cannot be found.
Can you help me?
$ spark-shell --packages com.crealytics:spark-excel_2.12:3.2.2_0.18.0
:: loading settings :: url = jar:file:/usr/local/spark-3.3.1-bin-hadoop3/jars/ivy-2.5.0.jar!/org/apache/ivy/core/settings/ivysettings.xml
Ivy Default Cache set to: /home/jovyan/.ivy2/cache
The jars for the packages stored in: /home/jovyan/.ivy2/jars
com.crealytics#spark-excel_2.12 added as a dependency
:: resolving dependencies :: org.apache.spark#spark-submit-parent-13eb0764-e97a-46aa-93c3-386249b15f8b;1.0
confs: [default]
found com.crealytics#spark-excel_2.12;3.2.2_0.18.0 in central
found org.apache.poi#poi;5.2.2 in central
found commons-codec#commons-codec;1.15 in central
found org.apache.commons#commons-collections4;4.4 in central
found org.apache.commons#commons-math3;3.6.1 in central
found commons-io#commons-io;2.11.0 in central
found com.zaxxer#SparseBitSet;1.2 in central
found org.apache.logging.log4j#log4j-api;2.17.2 in central
found org.apache.poi#poi-ooxml;5.2.2 in central
found org.apache.poi#poi-ooxml-lite;5.2.2 in central
found org.apache.xmlbeans#xmlbeans;5.0.3 in central
found org.apache.commons#commons-compress;1.21 in central
found com.github.virtuald#curvesapi;1.07 in central
found com.norbitltd#spoiwo_2.12;2.2.1 in central
found com.github.tototoshi#scala-csv_2.12;1.3.10 in central
found com.github.pjfanning#excel-streaming-reader;4.0.1 in central
found com.github.pjfanning#poi-shared-strings;2.5.3 in central
found org.slf4j#slf4j-api;1.7.36 in central
found com.h2database#h2;2.1.212 in central
found org.apache.commons#commons-text;1.9 in central
found org.apache.commons#commons-lang3;3.11 in central
found org.scala-lang.modules#scala-collection-compat_2.12;2.8.1 in central
downloading https://repo1.maven.org/maven2/com/crealytics/spark-excel_2.12/3.2.2_0.18.0/spark-excel_2.12-3.2.2_0.18.0.jar ...
[SUCCESSFUL ] com.crealytics#spark-excel_2.12;3.2.2_0.18.0!spark-excel_2.12.jar (3870ms)
downloading https://repo1.maven.org/maven2/org/apache/poi/poi/5.2.2/poi-5.2.2.jar ...
[SUCCESSFUL ] org.apache.poi#poi;5.2.2!poi.jar (958ms)
downloading https://repo1.maven.org/maven2/org/apache/poi/poi-ooxml/5.2.2/poi-ooxml-5.2.2.jar ...
[SUCCESSFUL ] org.apache.poi#poi-ooxml;5.2.2!poi-ooxml.jar (804ms)
downloading https://repo1.maven.org/maven2/org/apache/poi/poi-ooxml-lite/5.2.2/poi-ooxml-lite-5.2.2.jar ...
[SUCCESSFUL ] org.apache.poi#poi-ooxml-lite;5.2.2!poi-ooxml-lite.jar (1300ms)
downloading https://repo1.maven.org/maven2/org/apache/xmlbeans/xmlbeans/5.0.3/xmlbeans-5.0.3.jar ...
[SUCCESSFUL ] org.apache.xmlbeans#xmlbeans;5.0.3!xmlbeans.jar (680ms)
downloading https://repo1.maven.org/maven2/com/norbitltd/spoiwo_2.12/2.2.1/spoiwo_2.12-2.2.1.jar ...
[SUCCESSFUL ] com.norbitltd#spoiwo_2.12;2.2.1!spoiwo_2.12.jar (408ms)
downloading https://repo1.maven.org/maven2/com/github/pjfanning/excel-streaming-reader/4.0.1/excel-streaming-reader-4.0.1.jar ...
[SUCCESSFUL ] com.github.pjfanning#excel-streaming-reader;4.0.1!excel-streaming-reader.jar (331ms)
downloading https://repo1.maven.org/maven2/com/github/pjfanning/poi-shared-strings/2.5.3/poi-shared-strings-2.5.3.jar ...
[SUCCESSFUL ] com.github.pjfanning#poi-shared-strings;2.5.3!poi-shared-strings.jar (317ms)
downloading https://repo1.maven.org/maven2/commons-io/commons-io/2.11.0/commons-io-2.11.0.jar ...
[SUCCESSFUL ] commons-io#commons-io;2.11.0!commons-io.jar (331ms)
downloading https://repo1.maven.org/maven2/org/apache/commons/commons-compress/1.21/commons-compress-1.21.jar ...
[SUCCESSFUL ] org.apache.commons#commons-compress;1.21!commons-compress.jar (478ms)
downloading https://repo1.maven.org/maven2/org/apache/logging/log4j/log4j-api/2.17.2/log4j-api-2.17.2.jar ...
[SUCCESSFUL ] org.apache.logging.log4j#log4j-api;2.17.2!log4j-api.jar (367ms)
downloading https://repo1.maven.org/maven2/com/zaxxer/SparseBitSet/1.2/SparseBitSet-1.2.jar ...
[SUCCESSFUL ] com.zaxxer#SparseBitSet;1.2!SparseBitSet.jar (297ms)
downloading https://repo1.maven.org/maven2/org/apache/commons/commons-collections4/4.4/commons-collections4-4.4.jar ...
[SUCCESSFUL ] org.apache.commons#commons-collections4;4.4!commons-collections4.jar (452ms)
downloading https://repo1.maven.org/maven2/com/github/virtuald/curvesapi/1.07/curvesapi-1.07.jar ...
[SUCCESSFUL ] com.github.virtuald#curvesapi;1.07!curvesapi.jar (335ms)
downloading https://repo1.maven.org/maven2/commons-codec/commons-codec/1.15/commons-codec-1.15.jar ...
[SUCCESSFUL ] commons-codec#commons-codec;1.15!commons-codec.jar (372ms)
downloading https://repo1.maven.org/maven2/org/apache/commons/commons-math3/3.6.1/commons-math3-3.6.1.jar ...
[SUCCESSFUL ] org.apache.commons#commons-math3;3.6.1!commons-math3.jar (744ms)
downloading https://repo1.maven.org/maven2/org/scala-lang/modules/scala-collection-compat_2.12/2.8.1/scala-collection-compat_2.12-2.8.1.jar ...
[SUCCESSFUL ] org.scala-lang.modules#scala-collection-compat_2.12;2.8.1!scala-collection-compat_2.12.jar (371ms)
downloading https://repo1.maven.org/maven2/com/github/tototoshi/scala-csv_2.12/1.3.10/scala-csv_2.12-1.3.10.jar ...
[SUCCESSFUL ] com.github.tototoshi#scala-csv_2.12;1.3.10!scala-csv_2.12.jar (295ms)
downloading https://repo1.maven.org/maven2/org/slf4j/slf4j-api/1.7.36/slf4j-api-1.7.36.jar ...
[SUCCESSFUL ] org.slf4j#slf4j-api;1.7.36!slf4j-api.jar (285ms)
downloading https://repo1.maven.org/maven2/com/h2database/h2/2.1.212/h2-2.1.212.jar ...
[SUCCESSFUL ] com.h2database#h2;2.1.212!h2.jar (853ms)
downloading https://repo1.maven.org/maven2/org/apache/commons/commons-text/1.9/commons-text-1.9.jar ...
[SUCCESSFUL ] org.apache.commons#commons-text;1.9!commons-text.jar (340ms)
downloading https://repo1.maven.org/maven2/org/apache/commons/commons-lang3/3.11/commons-lang3-3.11.jar ...
[SUCCESSFUL ] org.apache.commons#commons-lang3;3.11!commons-lang3.jar (414ms)
:: resolution report :: resolve 24141ms :: artifacts dl 14647ms
:: modules in use:
com.crealytics#spark-excel_2.12;3.2.2_0.18.0 from central in [default]
com.github.pjfanning#excel-streaming-reader;4.0.1 from central in [default]
com.github.pjfanning#poi-shared-strings;2.5.3 from central in [default]
com.github.tototoshi#scala-csv_2.12;1.3.10 from central in [default]
com.github.virtuald#curvesapi;1.07 from central in [default]
com.h2database#h2;2.1.212 from central in [default]
com.norbitltd#spoiwo_2.12;2.2.1 from central in [default]
com.zaxxer#SparseBitSet;1.2 from central in [default]
commons-codec#commons-codec;1.15 from central in [default]
commons-io#commons-io;2.11.0 from central in [default]
org.apache.commons#commons-collections4;4.4 from central in [default]
org.apache.commons#commons-compress;1.21 from central in [default]
org.apache.commons#commons-lang3;3.11 from central in [default]
org.apache.commons#commons-math3;3.6.1 from central in [default]
org.apache.commons#commons-text;1.9 from central in [default]
org.apache.logging.log4j#log4j-api;2.17.2 from central in [default]
org.apache.poi#poi;5.2.2 from central in [default]
org.apache.poi#poi-ooxml;5.2.2 from central in [default]
org.apache.poi#poi-ooxml-lite;5.2.2 from central in [default]
org.apache.xmlbeans#xmlbeans;5.0.3 from central in [default]
org.scala-lang.modules#scala-collection-compat_2.12;2.8.1 from central in [default]
org.slf4j#slf4j-api;1.7.36 from central in [default]
:: evicted modules:
org.apache.logging.log4j#log4j-api;2.17.1 by [org.apache.logging.log4j#log4j-api;2.17.2] in [default]
org.apache.poi#poi;5.2.1 by [org.apache.poi#poi;5.2.2] in [default]
org.apache.poi#poi-ooxml;5.2.1 by [org.apache.poi#poi-ooxml;5.2.2] in [default]
---------------------------------------------------------------------
| | modules || artifacts |
| conf | number| search|dwnlded|evicted|| number|dwnlded|
---------------------------------------------------------------------
| default | 25 | 22 | 22 | 3 || 22 | 22 |
---------------------------------------------------------------------
:: retrieving :: org.apache.spark#spark-submit-parent-13eb0764-e97a-46aa-93c3-386249b15f8b
confs: [default]
22 artifacts copied, 0 already retrieved (40428kB/101ms)
23/01/26 20:28:56 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Setting default log level to "WARN".
To adjust logging level use sc.setLogLevel(newLevel). For SparkR, use setLogLevel(newLevel).
23/01/26 20:29:05 WARN Utils: Service 'SparkUI' could not bind on port 4040. Attempting port 4041.
Spark context Web UI available at http://104c93401bff:4041
Spark context available as 'sc' (master = local[*], app id = local-1674764945256).
Spark session available as 'spark'.
Welcome to
____ __
/ __/__ ___ _____/ /__
_\ \/ _ \/ _ `/ __/ '_/
/___/ .__/\_,_/_/ /_/\_\ version 3.3.1
/_/
Using Scala version 2.12.15 (OpenJDK 64-Bit Server VM, Java 17.0.5)
Type in expressions to have them evaluated.
Type :help for more information.
scala>
Py4JJavaError: An error occurred while calling o37.load.
: java.lang.ClassNotFoundException:
Failed to find data source: com.crealytics.spark. Please find packages at
https://spark.apache.org/third-party-projects.html
at org.apache.spark.sql.errors.QueryExecutionErrors$.failedToFindDataSourceError(QueryExecutionErrors.scala:587)
at org.apache.spark.sql.execution.datasources.DataSource$.lookupDataSource(DataSource.scala:675)
at org.apache.spark.sql.execution.datasources.DataSource$.lookupDataSourceV2(DataSource.scala:725)
at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:207)
at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:185)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:77)
at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:568)
at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
at py4j.Gateway.invoke(Gateway.java:282)
at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
at py4j.commands.CallCommand.execute(CallCommand.java:79)
at py4j.ClientServerConnection.waitForCommands(ClientServerConnection.java:182)
at py4j.ClientServerConnection.run(ClientServerConnection.java:106)
at java.base/java.lang.Thread.run(Thread.java:833)
Caused by: java.lang.ClassNotFoundException: com.crealytics.spark.DefaultSource
at java.base/java.net.URLClassLoader.findClass(URLClassLoader.java:445)
at java.base/java.lang.ClassLoader.loadClass(ClassLoader.java:587)
at java.base/java.lang.ClassLoader.loadClass(ClassLoader.java:520)
at org.apache.spark.sql.execution.datasources.DataSource$.$anonfun$lookupDataSource$5(DataSource.scala:661)
at scala.util.Try$.apply(Try.scala:213)
at org.apache.spark.sql.execution.datasources.DataSource$.$anonfun$lookupDataSource$4(DataSource.scala:661)
at scala.util.Failure.orElse(Try.scala:224)
at org.apache.spark.sql.execution.datasources.DataSource$.lookupDataSource(DataSource.scala:661)
... 15 more
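Two things stand out here. First, jars fetched by `spark-shell` belong to that standalone Scala session and are not automatically visible to the Python kernel behind the notebook; the package has to be on the classpath of the session the notebook itself creates. Second, the stack trace says Spark looked for `com.crealytics.spark.DefaultSource`, which suggests the format was given as `com.crealytics.spark` rather than the full `com.crealytics.spark.excel`. A sketch of one way to wire this up (the env-var approach; paths and options below are illustrative, not from the original post):

```shell
# Make the spark-excel package visible to the Python Spark session the
# notebook creates, not just to a separate spark-shell run. Coordinates
# are the ones from the log above; adjust to your Spark/Scala versions.
export PYSPARK_SUBMIT_ARGS="--packages com.crealytics:spark-excel_2.12:3.2.2_0.18.0 pyspark-shell"

# Then, inside the notebook, use the *full* data source name -- the
# ClassNotFoundException above points at the short name being used:
#
#   df = (spark.read
#              .format("com.crealytics.spark.excel")   # not "com.crealytics.spark"
#              .option("header", "true")
#              .load("/home/jovyan/work/example.xlsx"))  # illustrative path
```

An equivalent is to set `spark.jars.packages` on `SparkSession.builder` before the session is created; adding packages to an already-running session has no effect.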
I would like to set up my .NET Core microservice, which runs via a dotnet/Tye project, with a local ELK Docker stack.
I followed these guidelines:
https://github.com/dotnet/tye/blob/main/docs/recipes/logging_elastic.md and added this to my tye.yaml:
extensions:
- name: elastic
logPath: ./.logs
And I'm running the project with:
tye run --watch --logs elastic=http://localhost:9200
For some reason I'm not getting any indexes in the Kibana configuration.
Update 1
Elasticsearch logs attached below.
I should also mention that since I'm on a Mac M1, I had to build the sebp/elk image for arm64 locally (per the official docs).
chown: changing ownership of '/var/lib/elasticsearch/indices/Ii0QK5l8QLi_yzEVLBZyyg/_state': Permission denied
chown: changing ownership of '/var/lib/elasticsearch/indices/Ii0QK5l8QLi_yzEVLBZyyg/0/_state/retention-leases-0.st': Permission denied
chown: changing ownership of '/var/lib/elasticsearch/indices/Ii0QK5l8QLi_yzEVLBZyyg/0/_state/state-4.st': Permission denied
chown: changing ownership of '/var/lib/elasticsearch/indices/Ii0QK5l8QLi_yzEVLBZyyg/0/_state': Permission denied
chown: changing ownership of '/var/lib/elasticsearch/indices/Ii0QK5l8QLi_yzEVLBZyyg/0/translog/translog.ckp': Permission denied
chown: changing ownership of '/var/lib/elasticsearch/indices/Ii0QK5l8QLi_yzEVLBZyyg/0/translog/translog-6.tlog': Permission denied
chown: changing ownership of '/var/lib/elasticsearch/indices/Ii0QK5l8QLi_yzEVLBZyyg/0/translog': Permission denied
chown: changing ownership of '/var/lib/elasticsearch/indices/Ii0QK5l8QLi_yzEVLBZyyg/0/index/segments_2': Permission denied
chown: changing ownership of '/var/lib/elasticsearch/indices/Ii0QK5l8QLi_yzEVLBZyyg/0/index/write.lock': Permission denied
chown: changing ownership of '/var/lib/elasticsearch/indices/Ii0QK5l8QLi_yzEVLBZyyg/0/index': Permission denied
chown: changing ownership of '/var/lib/elasticsearch/indices/Ii0QK5l8QLi_yzEVLBZyyg/0': Permission denied
chown: changing ownership of '/var/lib/elasticsearch/indices/Ii0QK5l8QLi_yzEVLBZyyg': Permission denied
chown: changing ownership of '/var/lib/elasticsearch/indices/BpgSZOOjQEGvmc8CikJ7JQ/_state': Permission denied
chown: changing ownership of '/var/lib/elasticsearch/indices/BpgSZOOjQEGvmc8CikJ7JQ/0/_state': Permission denied
chown: changing ownership of '/var/lib/elasticsearch/indices/BpgSZOOjQEGvmc8CikJ7JQ/0/translog': Permission denied
chown: changing ownership of '/var/lib/elasticsearch/indices/BpgSZOOjQEGvmc8CikJ7JQ/0/index': Permission denied
chown: changing ownership of '/var/lib/elasticsearch/indices/BpgSZOOjQEGvmc8CikJ7JQ/0': Permission denied
chown: changing ownership of '/var/lib/elasticsearch/indices/BpgSZOOjQEGvmc8CikJ7JQ': Permission denied
chown: changing ownership of '/var/lib/elasticsearch/indices': Permission denied
chown: changing ownership of '/var/lib/elasticsearch/nodes': Permission denied
chown: changing ownership of '/var/lib/elasticsearch/snapshot_cache/segments_6': Permission denied
chown: changing ownership of '/var/lib/elasticsearch/snapshot_cache/write.lock': Permission denied
chown: changing ownership of '/var/lib/elasticsearch/snapshot_cache': Permission denied
chown: changing ownership of '/var/lib/elasticsearch': Permission denied
* Starting Elasticsearch Server
...done.
waiting for Elasticsearch to be up (1/30)
waiting for Elasticsearch to be up (2/30)
waiting for Elasticsearch to be up (3/30)
waiting for Elasticsearch to be up (4/30)
waiting for Elasticsearch to be up (5/30)
waiting for Elasticsearch to be up (6/30)
waiting for Elasticsearch to be up (7/30)
waiting for Elasticsearch to be up (8/30)
waiting for Elasticsearch to be up (9/30)
waiting for Elasticsearch to be up (10/30)
Waiting for Elasticsearch cluster to respond (1/30)
logstash started.
* Starting Kibana5
...done.
==> /var/log/elasticsearch/elasticsearch.log <==
[2022-08-13T20:02:23,121][INFO ][o.e.b.BootstrapChecks ] [elk] bound or publishing to a non-loopback address, enforcing bootstrap checks
[2022-08-13T20:02:23,122][INFO ][o.e.c.c.Coordinator ] [elk] cluster UUID [BCwoAnKSQYqAe1XVWAkKQg]
[2022-08-13T20:02:23,243][INFO ][o.e.c.s.MasterService ] [elk] elected-as-master ([1] nodes joined)[{elk}{xL-uMsG2RD2KxDzGf8SGhw}{5WXUD0v3Txe1UtMW0ws07A}{192.168.208.2}{192.168.208.2:9300}{cdfhilmrstw} completing election, _BECOME_MASTER_TASK_, _FINISH_ELECTION_], term: 7, version: 169, delta: master node changed {previous [], current [{elk}{xL-uMsG2RD2KxDzGf8SGhw}{5WXUD0v3Txe1UtMW0ws07A}{192.168.208.2}{192.168.208.2:9300}{cdfhilmrstw}]}
[2022-08-13T20:02:23,352][INFO ][o.e.c.s.ClusterApplierService] [elk] master node changed {previous [], current [{elk}{xL-uMsG2RD2KxDzGf8SGhw}{5WXUD0v3Txe1UtMW0ws07A}{192.168.208.2}{192.168.208.2:9300}{cdfhilmrstw}]}, term: 7, version: 169, reason: Publication{term=7, version=169}
[2022-08-13T20:02:23,385][INFO ][o.e.h.AbstractHttpServerTransport] [elk] publish_address {192.168.208.2:9200}, bound_addresses {0.0.0.0:9200}
[2022-08-13T20:02:23,385][INFO ][o.e.n.Node ] [elk] started
[2022-08-13T20:02:23,566][WARN ][o.e.x.s.i.SetSecurityUserProcessor] [elk] Creating processor [set_security_user] (tag [null]) on field [_security] but authentication is not currently enabled on this cluster - this processor is likely to fail at runtime if it is used
[2022-08-13T20:02:23,683][INFO ][o.e.l.LicenseService ] [elk] license [2cc9ac6a-c421-4021-aa2d-daa12f2e2d0a] mode [basic] - valid
[2022-08-13T20:02:23,685][INFO ][o.e.g.GatewayService ] [elk] recovered [8] indices into cluster_state
[2022-08-13T20:02:25,510][INFO ][o.e.c.r.a.AllocationService] [elk] current.health="GREEN" message="Cluster health status changed from [RED] to [GREEN] (reason: [shards started [[.geoip_databases][0]]])." previous.health="RED" reason="shards started [[.geoip_databases][0]]"
==> /var/log/logstash/logstash-plain.log <==
==> /var/log/kibana/kibana5.log <==
==> /var/log/elasticsearch/elasticsearch.log <==
[2022-08-13T20:02:25,940][INFO ][o.e.i.g.DatabaseNodeService] [elk] successfully loaded geoip database file [GeoLite2-Country.mmdb]
[2022-08-13T20:02:26,012][INFO ][o.e.i.g.DatabaseNodeService] [elk] successfully loaded geoip database file [GeoLite2-ASN.mmdb]
[2022-08-13T20:02:26,934][INFO ][o.e.i.g.DatabaseNodeService] [elk] successfully loaded geoip database file [GeoLite2-City.mmdb]
==> /var/log/logstash/logstash-plain.log <==
[2022-08-13T20:02:39,552][INFO ][logstash.runner ] Log4j configuration path used is: /opt/logstash/config/log4j2.properties
[2022-08-13T20:02:39,561][INFO ][logstash.runner ] Starting Logstash {"logstash.version"=>"8.1.0", "jruby.version"=>"jruby 9.2.20.1 (2.5.8) 2021-11-30 2a2962fbd1 OpenJDK 64-Bit Server VM 11.0.13+8 on 11.0.13+8 +indy +jit [linux-aarch64]"}
[2022-08-13T20:02:39,562][INFO ][logstash.runner ] JVM bootstrap flags: [-Xms1g, -Xmx1g, -XX:+UseConcMarkSweepGC, -XX:CMSInitiatingOccupancyFraction=75, -XX:+UseCMSInitiatingOccupancyOnly, -Djava.awt.headless=true, -Dfile.encoding=UTF-8, -Djruby.compile.invokedynamic=true, -Djruby.jit.threshold=0, -Djruby.regexp.interruptible=true, -XX:+HeapDumpOnOutOfMemoryError, -Djava.security.egd=file:/dev/urandom, -Dlog4j2.isThreadContextMapInheritable=true, --add-opens=java.base/java.security=ALL-UNNAMED, --add-opens=java.base/java.io=ALL-UNNAMED, --add-opens=java.base/java.nio.channels=ALL-UNNAMED, --add-opens=java.base/sun.nio.ch=ALL-UNNAMED, --add-opens=java.management/sun.management=ALL-UNNAMED, -Djava.io.tmpdir=/opt/logstash]
[2022-08-13T20:02:39,577][INFO ][logstash.settings ] Creating directory {:setting=>"path.queue", :path=>"/opt/logstash/data/queue"}
[2022-08-13T20:02:39,584][INFO ][logstash.settings ] Creating directory {:setting=>"path.dead_letter_queue", :path=>"/opt/logstash/data/dead_letter_queue"}
[2022-08-13T20:02:39,813][INFO ][logstash.agent ] No persistent UUID file found. Generating new UUID {:uuid=>"7710d596-7725-4b7c-b1c6-371f4360a636", :path=>"/opt/logstash/data/uuid"}
==> /var/log/elasticsearch/elasticsearch.log <==
[2022-08-13T20:02:41,002][INFO ][o.e.t.LoggingTaskListener] [elk] 190 finished with response BulkByScrollResponse[took=331.9ms,timed_out=false,sliceId=null,updated=19,created=0,deleted=0,batches=1,versionConflicts=0,noops=0,retries=0,throttledUntil=0s,bulk_failures=[],search_failures=[]]
[2022-08-13T20:02:41,034][INFO ][o.e.t.LoggingTaskListener] [elk] 184 finished with response BulkByScrollResponse[took=462.3ms,timed_out=false,sliceId=null,updated=11,created=0,deleted=0,batches=1,versionConflicts=0,noops=0,retries=0,throttledUntil=0s,bulk_failures=[],search_failures=[]]
==> /var/log/logstash/logstash-plain.log <==
[2022-08-13T20:02:41,833][INFO ][logstash.agent ] Successfully started Logstash API endpoint {:port=>9600, :ssl_enabled=>false}
[2022-08-13T20:02:43,307][INFO ][org.reflections.Reflections] Reflections took 181 ms to scan 1 urls, producing 120 keys and 417 values
[2022-08-13T20:02:43,821][INFO ][logstash.javapipeline ] Pipeline `main` is configured with `pipeline.ecs_compatibility: v8` setting. All plugins in this pipeline will default to `ecs_compatibility => v8` unless explicitly configured otherwise.
[2022-08-13T20:02:43,855][INFO ][logstash.outputs.elasticsearch][main] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["//localhost"]}
[2022-08-13T20:02:44,105][INFO ][logstash.outputs.elasticsearch][main] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://localhost:9200/]}}
[2022-08-13T20:02:44,281][WARN ][logstash.outputs.elasticsearch][main] Restored connection to ES instance {:url=>"http://localhost:9200/"}
[2022-08-13T20:02:44,292][INFO ][logstash.outputs.elasticsearch][main] Elasticsearch version determined (8.1.0) {:es_version=>8}
[2022-08-13T20:02:44,294][WARN ][logstash.outputs.elasticsearch][main] Detected a 6.x and above cluster: the `type` event field won't be used to determine the document _type {:es_version=>8}
[2022-08-13T20:02:44,373][INFO ][logstash.outputs.elasticsearch][main] Config is not compliant with data streams. `data_stream => auto` resolved to `false`
[2022-08-13T20:02:44,381][INFO ][logstash.outputs.elasticsearch][main] Config is not compliant with data streams. `data_stream => auto` resolved to `false`
[2022-08-13T20:02:44,382][WARN ][logstash.outputs.elasticsearch][main] Elasticsearch Output configured with `ecs_compatibility => v8`, which resolved to an UNRELEASED preview of version 8.0.0 of the Elastic Common Schema. Once ECS v8 and an updated release of this plugin are publicly available, you will need to update this plugin to resolve this warning.
[2022-08-13T20:02:44,437][WARN ][logstash.filters.grok ][main] ECS v8 support is a preview of the unreleased ECS v8, and uses the v1 patterns. When Version 8 of the Elastic Common Schema becomes available, this plugin will need to be updated
[2022-08-13T20:02:44,530][WARN ][logstash.filters.grok ][main] ECS v8 support is a preview of the unreleased ECS v8, and uses the v1 patterns. When Version 8 of the Elastic Common Schema becomes available, this plugin will need to be updated
[2022-08-13T20:02:44,594][INFO ][logstash.javapipeline ][main] Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>4, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50, "pipeline.max_inflight"=>500, "pipeline.sources"=>["/etc/logstash/conf.d/02-beats-input.conf", "/etc/logstash/conf.d/10-syslog.conf", "/etc/logstash/conf.d/11-nginx.conf", "/etc/logstash/conf.d/30-output.conf"], :thread=>"#<Thread:0x2926b0a run>"}
[2022-08-13T20:02:45,171][INFO ][logstash.javapipeline ][main] Pipeline Java execution initialization time {"seconds"=>0.57}
[2022-08-13T20:02:45,184][INFO ][logstash.inputs.beats ][main] Starting input listener {:address=>"0.0.0.0:5044"}
[2022-08-13T20:02:45,213][INFO ][logstash.javapipeline ][main] Pipeline started {"pipeline.id"=>"main"}
[2022-08-13T20:02:45,313][INFO ][logstash.agent ] Pipelines running {:count=>1, :running_pipelines=>[:main], :non_running_pipelines=>[]}
[2022-08-13T20:02:45,315][INFO ][org.logstash.beats.Server][main][829cd21b7fbde9c57f6074e54675a6dd14081ec403bdd5ea935fd37106249341] Starting server on port: 5044
==> /var/log/elasticsearch/elasticsearch.log <==
[2022-08-13T20:02:52,829][INFO ][o.e.c.m.MetadataMappingService] [elk] [.kibana_8.1.0_001/67wIFep3T4qg_Cn32rVbgg] update_mapping [_doc]
[2022-08-13T20:02:54,059][INFO ][o.e.c.m.MetadataMappingService] [elk] [.kibana_8.1.0_001/67wIFep3T4qg_Cn32rVbgg] update_mapping [_doc]
[2022-08-13T20:03:54,361][WARN ][o.e.x.s.i.SetSecurityUserProcessor] [elk] Creating processor [set_security_user] (tag [null]) on field [_security] but authentication is not currently enabled on this cluster - this processor is likely to fail at runtime if it is used
[2022-08-13T20:03:54,380][INFO ][o.e.c.m.MetadataMappingService] [elk] [.kibana_8.1.0_001/67wIFep3T4qg_Cn32rVbgg] update_mapping [_doc]
[2022-08-13T20:05:09,475][INFO ][o.e.c.m.MetadataMappingService] [elk] [.kibana_8.1.0_001/67wIFep3T4qg_Cn32rVbgg] update_mapping [_doc]
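Given the `chown: ... Permission denied` lines at the top of the log, one thing worth ruling out is that the Elasticsearch data directory is owned by a different UID than the elasticsearch user inside the container. A rough sketch, where the volume name "elk-data" and UID 991 are both assumptions to be checked against your compose file and image:

```shell
# Reset ownership of the ES data volume using a throwaway container.
# "elk-data" and UID 991 are placeholders -- verify against your setup.
docker run --rm -v elk-data:/var/lib/elasticsearch alpine \
  chown -R 991:991 /var/lib/elasticsearch

# After restarting the stack, check whether any logstash-* indexes
# exist at all before looking at the Kibana side:
curl -s 'http://localhost:9200/_cat/indices?v'
```

If `_cat/indices` shows no logstash indexes, the problem is upstream of Kibana (Tye/Logstash shipping), not in the Kibana index-pattern configuration.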
I'm trying to start a Docker container. I'm using a docker-elk.yml file to generate the ELK containers. The Elasticsearch and Kibana containers work fine, but the Logstash container starts (I can get into its bash) and then stops by itself after a while.
Logs of the container:
[2019-04-11T08:48:26,882][INFO ][logstash.runner ] Starting Logstash {"logstash.version"=>"6.6.0"}
[2019-04-11T08:48:33,497][INFO ][logstash.pipeline ] Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>4, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50}
[2019-04-11T08:48:34,062][INFO ][logstash.outputs.elasticsearch] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://elasticsearch:9200/]}}
[2019-04-11T08:48:34,310][WARN ][logstash.outputs.elasticsearch] Restored connection to ES instance {:url=>"http://elasticsearch:9200/"}
[2019-04-11T08:48:34,409][INFO ][logstash.outputs.elasticsearch] ES Output version determined {:es_version=>6}
[2019-04-11T08:48:34,415][WARN ][logstash.outputs.elasticsearch] Detected a 6.x and above cluster: the type event field won't be used to determine the document _type {:es_version=>6}
[2019-04-11T08:48:34,469][INFO ][logstash.outputs.elasticsearch]
New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["//elasticsearch:9200"]}
[2019-04-11T08:48:34,486][INFO ][logstash.outputs.elasticsearch] Using mapping template from {:path=>nil}
[2019-04-11T08:48:34,503][INFO ][logstash.outputs.elasticsearch]
Attempting to install template {:manage_template=>{"template"=>"logstash-*", "version"=>60001, "settings"=>{"index.refresh_interval"=>"5s"}, "mappings"=>{"_default_"=>{"dynamic_templates"=>[{"message_field"=>{"path_match"=>"message", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false}}}, {"string_fields"=>{"match"=>"*", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false, "fields"=>{"keyword"=>{"type"=>"keyword", "ignore_above"=>256}}}}}], "properties"=>{"@timestamp"=>{"type"=>"date"}, "@version"=>{"type"=>"keyword"}, "geoip"=>{"dynamic"=>true, "properties"=>{"ip"=>{"type"=>"ip"}, "location"=>{"type"=>"geo_point"}, "latitude"=>{"type"=>"half_float"}, "longitude"=>{"type"=>"half_float"}}}}}}}}
[2019-04-11T08:48:34,960][INFO ][logstash.inputs.beats ] Beats inputs: Starting input listener {:address=>"0.0.0.0:5000"}
[2019-04-11T08:48:34,985][INFO ][logstash.pipeline ] Pipeline started successfully {:pipeline_id=>"main", :thread=>"#"}
[2019-04-11T08:48:35,077][INFO ][logstash.agent ] Pipelines running {:count=>1, :running_pipelines=>[:main], :non_running_pipelines=>[]}
[2019-04-11T08:48:35,144][INFO ][org.logstash.beats.Server] Starting server on port: 5000
[2019-04-11T08:48:35,499][INFO ][logstash.agent ] Successfully started Logstash API endpoint {:port=>9600}
[2019-04-11T08:48:50,591][INFO ][logstash.outputs.file ] Opening file {:path=>"/usr/share/logstash/output.log"}
[2019-04-11T13:16:51,947][WARN ][logstash.runner ] SIGTERM received. Shutting down.
[2019-04-11T13:16:56,498][INFO ][logstash.pipeline ] Pipeline has terminated {:pipeline_id=>"main", :thread=>"#"}
Does it try to require a relative path? That's been removed in Ruby 1.9.
uri:classloader:/META-INF/jruby.home/lib/ruby/stdlib/rubygems/core_ext/kernel_require.rb:59:in `require':
It seems your ruby installation is missing psych (for YAML output).
To eliminate this warning, please install libyaml and reinstall your ruby.
[ERROR] 2019-04-11 14:18:02.058 [main] Logstash - java.lang.IllegalStateException: Logstash stopped processing because of an error:
(GemspecError) There was a LoadError while loading logstash-core.gemspec:
load error: psych -- java.lang.RuntimeException: BUG: we can not copy embedded jar to temp directory
Does it try to require a relative path? That's been removed in Ruby 1.9.
uri:classloader:/META-INF/jruby.home/lib/ruby/stdlib/rubygems/core_ext/kernel_require.rb:59:in `require':
It seems your ruby installation is missing psych (for YAML output).
To eliminate this warning, please install libyaml and reinstall your ruby.
[ERROR] 2019-04-11 13:42:01.450 [main]
Logstash - java.lang.IllegalStateException: Logstash stopped processing because of an error: (GemspecError) There was a LoadError while loading logstash-core.gemspec: load error: psych -- java.lang.RuntimeException: BUG: we can not copy embedded jar to temp directory
I have tried removing the tmp folder inside the container, but it did not help.
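The "BUG: we can not copy embedded jar to temp directory" message is commonly reported when the JVM's temp directory is full, read-only, or mounted noexec. So instead of deleting /tmp inside the container, one option is to point Logstash at a temp directory it can definitely write to. A sketch, where the image tag, container name, and path are illustrative:

```shell
# Point the Logstash JVM at a writable temp directory inside the
# container. LS_JAVA_OPTS is appended to the JVM options by the
# official images; the chosen directory must exist and be writable
# by the logstash user.
docker run -d --name logstash \
  -e LS_JAVA_OPTS="-Djava.io.tmpdir=/usr/share/logstash/data/tmp" \
  docker.elastic.co/logstash/logstash:6.6.0
```

With docker-compose, the same `LS_JAVA_OPTS` line goes under the logstash service's `environment:` key.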
I see all my executors frequently changing to the Dead state on one of my Jenkins slave machines (Windows 2008 R2 SP2).
Jenkins ver. 1.651.3
I have restarted the Jenkins server as well as the service.
Error logs:
Unexpected executor death
java.io.IOException: Failed to create a temporary file in /var/lib/jenkins/jobs/ABCD/jobs/EFGH/jobs/Build
at hudson.util.AtomicFileWriter.<init>(AtomicFileWriter.java:68)
at hudson.util.AtomicFileWriter.<init>(AtomicFileWriter.java:55)
at hudson.util.TextFile.write(TextFile.java:118)
at hudson.model.Job.saveNextBuildNumber(Job.java:293)
at hudson.model.Job.assignBuildNumber(Job.java:351)
at hudson.model.Run.<init>(Run.java:284)
at hudson.model.AbstractBuild.<init>(AbstractBuild.java:167)
at hudson.model.Build.<init>(Build.java:92)
at hudson.model.FreeStyleBuild.<init>(FreeStyleBuild.java:34)
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
at jenkins.model.lazy.LazyBuildMixIn.newBuild(LazyBuildMixIn.java:175)
at hudson.model.AbstractProject.newBuild(AbstractProject.java:1018)
at hudson.model.AbstractProject.createExecutable(AbstractProject.java:1209)
at hudson.model.AbstractProject.createExecutable(AbstractProject.java:144)
at hudson.model.Executor$1.call(Executor.java:364)
at hudson.model.Executor$1.call(Executor.java:346)
at hudson.model.Queue._withLock(Queue.java:1365)
at hudson.model.Queue.withLock(Queue.java:1230)
at hudson.model.Executor.run(Executor.java:346)
Caused by: java.io.IOException: Permission denied
at java.io.UnixFileSystem.createFileExclusively(Native Method)
at java.io.File.createNewFile(File.java:1006)
at java.io.File.createTempFile(File.java:1989)
at hudson.util.AtomicFileWriter.<init>(AtomicFileWriter.java:66)
... 21 more
I see this error log on my slave machine:
INFO: File download attempt 1
Oct 17, 2017 10:32:00 AM com.microsoft.tfs.core.clients.versioncontrol.VersionControlClient downloadFileToStreams
INFO: File download attempt 1
Oct 17, 2017 10:32:00 AM com.microsoft.tfs.core.ws.runtime.client.SOAPService executeSOAPRequestInternal
INFO: SOAP method='UpdateLocalVersion', status=200, content-length=367, server-wait=402 ms, parse=0 ms, total=402 ms, throughput=913 B/s, gzip
Oct 17, 2017 10:32:00 AM com.microsoft.tfs.core.clients.versioncontrol.VersionControlClient downloadFileToStreams
INFO: File download attempt 1
Oct 17, 2017 10:32:00 AM com.microsoft.tfs.core.clients.versioncontrol.VersionControlClient downloadFileToStreams
INFO: File download attempt 1
Oct 17, 2017 10:32:00 AM com.microsoft.tfs.core.clients.versioncontrol.VersionControlClient downloadFileToStreams
INFO: File download attempt 1
Can you please check the owner of the path /var/lib/jenkins/jobs/ABCD/jobs/EFGH/jobs/Build? If by any chance it was created manually, you will get a permission-denied error whenever the owner is not the Jenkins user. Also check the free disk space on the server as well as on the agent, and try rebooting the slave agent; that has helped at times.
How long are the real job names for ABCD and EFGH?
I've run into the 260 character maximum path length with Jenkins on Windows 2008 R2 before.
The path in:
java.io.IOException: Failed to create a temporary file in /var/lib/jenkins/jobs/ABCD/jobs/EFGH/jobs/Build
with its three /jobs segments seems strange to me. In Jenkins it should normally look like this:
+- /var/lib/jenkins/jobs
+- ABCD
| +- builds
| | +- ...
| +- ...
+- EFGH
| +- builds
| | +- ...
| +- ...
+- Build
+- builds
| +- ...
+- ...
Maybe there's some misconfiguration concerning paths, and Jenkins tries a mkdir /var/lib/jenkins/jobs/ABCD/jobs/EFGH/jobs/Build while the Jenkins user, or the user under which the job runs, doesn't have permission to do that.
See also File permissions and attributes:
| w | ... | The directory's contents can be modified (create new files or folders; [...]); requires the execute permission to be also set, otherwise this permission has no effect. |
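That rule is easy to demonstrate with a throwaway directory (a minimal sketch using a temp dir; run it as a non-root user, since root bypasses permission checks):

```shell
# The 'w' bit on a directory has no effect without 'x'.
d=$(mktemp -d)
mkdir "$d/target"

chmod u=rw "$d/target"              # write, but no execute
touch "$d/target/file" 2>/dev/null \
  && echo "created without x" \
  || echo "denied without x"        # what a non-root user typically sees

chmod u=rwx "$d/target"             # write plus execute
touch "$d/target/file" \
  && echo "created with x"

rm -rf "$d"
```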
In my case, this happened because the server was very low on disk space. Click "Build Executor Status" on the dashboard and check whether it reports low disk space or 0 swap space. Free up some space, then restart the Jenkins server/service and try again.
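The same checks can be done with standard tools from a shell on both the master and the agent (`/var/lib/jenkins` assumes the default Jenkins home location):

```shell
df -h /var/lib/jenkins   # free disk space on the volume holding the Jenkins home
free -m                  # available memory and swap, in MiB
```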
I imported a Grails project using IntelliJ IDEA. Now when I try to run-app the project, it shows the following error.
|Configuring classpath
Error |
Resolve error obtaining dependencies: Failed to read artifact descriptor for xalan:serializer:jar:2.7.1 (Use --stacktrace to see the full trace)
Error |
Required Grails build dependencies were not found. This is normally due to internet connectivity issues (such as a misconfigured proxy) or missing repositories in grails-app/conf/BuildConfig.groovy. Please verify your configuration to continue.
Note: I am using Grails 2.3.8.
Running with --stacktrace produces the following:
Configuring classpath
Error |
Resolve error obtaining dependencies: Failed to read artifact descriptor for xalan:serializer:jar:2.7.1 (NOTE: Stack trace has been filtered. Use --verbose to see entire trace.)
org.eclipse.aether.resolution.ArtifactDescriptorException: Failed to read artifact descriptor for xalan:serializer:jar:2.7.1
at org.apache.maven.repository.internal.DefaultArtifactDescriptorReader.loadPom(DefaultArtifactDescriptorReader.java:335)
at org.apache.maven.repository.internal.DefaultArtifactDescriptorReader.readArtifactDescriptor(DefaultArtifactDescriptorReader.java:217)
at org.eclipse.aether.internal.impl.DefaultDependencyCollector.process(DefaultDependencyCollector.java:466)
at org.eclipse.aether.internal.impl.DefaultDependencyCollector.collectDependencies(DefaultDependencyCollector.java:261)
at org.eclipse.aether.internal.impl.DefaultRepositorySystem.collectDependencies(DefaultRepositorySystem.java:317)
at grails.util.BuildSettings.doResolve(BuildSettings.groovy:513)
at grails.util.BuildSettings.doResolve(BuildSettings.groovy)
at grails.util.BuildSettings$_getDefaultBuildDependencies_closure17.doCall(BuildSettings.groovy:774)
at grails.util.BuildSettings$_getDefaultBuildDependencies_closure17.doCall(BuildSettings.groovy)
at grails.util.BuildSettings.getDefaultBuildDependencies(BuildSettings.groovy:768)
at grails.util.BuildSettings.getBuildDependencies(BuildSettings.groovy:673)
Caused by: org.eclipse.aether.resolution.ArtifactResolutionException: Could not transfer artifact xalan:serializer:pom:2.7.1 from/to grailsCentral (http://repo.grails.org/grails/plugins): repo.grails.org
at org.eclipse.aether.internal.impl.DefaultArtifactResolver.resolve(DefaultArtifactResolver.java:460)
at org.eclipse.aether.internal.impl.DefaultArtifactResolver.resolveArtifacts(DefaultArtifactResolver.java:262)
at org.eclipse.aether.internal.impl.DefaultArtifactResolver.resolveArtifact(DefaultArtifactResolver.java:239)
at org.apache.maven.repository.internal.DefaultArtifactDescriptorReader.loadPom(DefaultArtifactDescriptorReader.java:320)
... 10 more
Caused by: org.eclipse.aether.transfer.ArtifactTransferException: Could not transfer artifact xalan:serializer:pom:2.7.1 from/to grailsCentral (http://repo.grails.org/grails/plugins): repo.grails.org
at org.eclipse.aether.connector.basic.ArtifactTransportListener.transferFailed(ArtifactTransportListener.java:43)
at org.eclipse.aether.connector.basic.BasicRepositoryConnector$TaskRunner.run(BasicRepositoryConnector.java:342)
at org.eclipse.aether.util.concurrency.RunnableErrorForwarder$1.run(RunnableErrorForwarder.java:67)
at org.eclipse.aether.connector.basic.BasicRepositoryConnector$DirectExecutor.execute(BasicRepositoryConnector.java:649)
at org.eclipse.aether.connector.basic.BasicRepositoryConnector.get(BasicRepositoryConnector.java:247)
at org.eclipse.aether.internal.impl.DefaultArtifactResolver.performDownloads(DefaultArtifactResolver.java:536)
at org.eclipse.aether.internal.impl.DefaultArtifactResolver.resolve(DefaultArtifactResolver.java:437)
... 13 more
Caused by: java.net.UnknownHostException: repo.grails.org
at org.apache.http.impl.conn.SystemDefaultDnsResolver.resolve(SystemDefaultDnsResolver.java:45)
at org.apache.http.impl.conn.DefaultClientConnectionOperator.resolveHostname(DefaultClientConnectionOperator.java:278)
at org.apache.http.impl.conn.DefaultClientConnectionOperator.openConnection(DefaultClientConnectionOperator.java:162)
at org.apache.http.impl.conn.ManagedClientConnectionImpl.open(ManagedClientConnectionImpl.java:294)
at org.apache.http.impl.client.DefaultRequestDirector.tryConnect(DefaultRequestDirector.java:643)
at org.apache.http.impl.client.DefaultRequestDirector.execute(DefaultRequestDirector.java:479)
at org.apache.http.impl.client.AbstractHttpClient.execute(AbstractHttpClient.java:906)
at org.apache.http.impl.client.DecompressingHttpClient.execute(DecompressingHttpClient.java:137)
at org.eclipse.aether.transport.http.HttpTransporter.execute(HttpTransporter.java:294)
at org.eclipse.aether.transport.http.HttpTransporter.implGet(HttpTransporter.java:250)
at org.eclipse.aether.spi.connector.transport.AbstractTransporter.get(AbstractTransporter.java:59)
at org.eclipse.aether.connector.basic.BasicRepositoryConnector$GetTaskRunner.runTask(BasicRepositoryConnector.java:418)
at org.eclipse.aether.connector.basic.BasicRepositoryConnector$TaskRunner.run(BasicRepositoryConnector.java:337)
... 18 more
Error |
Resolve error obtaining dependencies: Failed to read artifact descriptor for xalan:serializer:jar:2.7.1
Error |
Required Grails build dependencies were not found. This is normally due to internet connectivity issues (such as a misconfigured proxy) or missing repositories in grails-app/conf/BuildConfig.groovy. Please verify your configuration to continue.
I'd say java.net.UnknownHostException: repo.grails.org is the core issue. Do you need to configure a proxy? See the docs for the add-proxy command.
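In Grails 2.x a proxy can be stored and activated from the command line; the host, port, and credentials below are placeholders for your own proxy settings:

```shell
# Store a named proxy configuration and make it the active one
# (all values here are placeholders, not real settings).
grails add-proxy client --host=proxy.example.com --port=8080 \
    --username=user --password=pass
grails set-proxy client

# Independently verify that the repository host resolves from this machine:
ping -c 1 repo.grails.org
```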
When I run RAILS_ENV=production rake sunspot:solr:run, Solr starts as expected and the log looks something like this:
0 INFO (main) [ ] o.e.j.u.log Logging initialized @613ms
355 INFO (main) [ ] o.e.j.s.Server jetty-9.2.11.v20150529
380 WARN (main) [ ] o.e.j.s.h.RequestLogHandler !RequestLog
383 INFO (main) [ ] o.e.j.d.p.ScanningAppProvider Deployment monitor [file:/usr/local/rvm/gems/ruby-2.2.1/gems/sunspot_solr-2.2.3/solr/server/contexts/] at interval 0
1392 INFO (main) [ ] o.e.j.w.StandardDescriptorProcessor NO JSP Support for /solr, did not find org.apache.jasper.servlet.JspServlet
1437 WARN (main) [ ] o.e.j.s.SecurityHandler ServletContext@o.e.j.w.WebAppContext@1f3ff9d4{/solr,file:/usr/local/rvm/gems/ruby-2.2.1/gems/sunspot_solr-2.2.3/solr/server/solr-webapp/webapp/,STARTING}{/usr/local/rvm/gems/ruby-2.2.1/gems/sunspot_solr-2.2.3/solr/server/solr-webapp/webapp} has uncovered http methods for path: /
1456 INFO (main) [ ] o.a.s.s.SolrDispatchFilter SolrDispatchFilter.init(): WebAppClassLoader=879601585@346da7b1
1487 INFO (main) [ ] o.a.s.c.SolrResourceLoader JNDI not configured for solr (NoInitialContextEx)
1488 INFO (main) [ ] o.a.s.c.SolrResourceLoader using system property solr.solr.home: /usr/local/rvm/gems/ruby-2.2.1/gems/sunspot_solr-2.2.3/solr/server/solr
1491 INFO (main) [ ] o.a.s.c.SolrResourceLoader new SolrResourceLoader for directory: '/usr/local/rvm/gems/ruby-2.2.1/gems/sunspot_solr-2.2.3/solr/server/solr/'
1738 INFO (main) [ ] o.a.s.c.SolrXmlConfig Loading container configuration from /usr/local/rvm/gems/ruby-2.2.1/gems/sunspot_solr-2.2.3/solr/server/solr/solr.xml
1848 INFO (main) [ ] o.a.s.c.CoresLocator Config-defined core root directory: /usr/local/rvm/gems/ruby-2.2.1/gems/sunspot_solr-2.2.3/solr/server/solr
1882 INFO (main) [ ] o.a.s.c.CoreContainer New CoreContainer 394200281
1882 INFO (main) [ ] o.a.s.c.CoreContainer Loading cores into CoreContainer [instanceDir=/usr/local/rvm/gems/ruby-2.2.1/gems/sunspot_solr-2.2.3/solr/server/solr/]
1883 INFO (main) [ ] o.a.s.c.CoreContainer loading shared library: /usr/local/rvm/gems/ruby-2.2.1/gems/sunspot_solr-2.2.3/solr/server/solr/lib
1883 WARN (main) [ ] o.a.s.c.SolrResourceLoader Can't find (or read) directory to add to classloader: lib (resolved as: /usr/local/rvm/gems/ruby-2.2.1/gems/sunspot_solr-2.2.3/solr/server/solr/lib).
1905 INFO (main) [ ] o.a.s.h.c.HttpShardHandlerFactory created with socketTimeout : 600000,connTimeout : 60000,maxConnectionsPerHost : 20,maxConnections : 10000,corePoolSize : 0,maximumPoolSize : 2147483647,maxThreadIdleTime : 5,sizeOfQueue : -1,fairnessPolicy : false,useRetries : false,
2333 INFO (main) [ ] o.a.s.u.UpdateShardHandler Creating UpdateShardHandler HTTP client with params: socketTimeout=600000&connTimeout=60000&retry=true
2338 INFO (main) [ ] o.a.s.l.LogWatcher SLF4J impl is org.slf4j.impl.Log4jLoggerFactory
2339 INFO (main) [ ] o.a.s.l.LogWatcher Registering Log Listener [Log4j (org.slf4j.impl.Log4jLoggerFactory)]
2341 INFO (main) [ ] o.a.s.c.CoreContainer Security conf doesn't exist. Skipping setup for authorization module.
2341 INFO (main) [ ] o.a.s.c.CoreContainer No authentication plugin used.
2379 INFO (main) [ ] o.a.s.c.CoresLocator Looking for core definitions underneath /usr/local/rvm/gems/ruby-2.2.1/gems/sunspot_solr-2.2.3/solr/server/solr
2385 INFO (main) [ ] o.a.s.c.CoresLocator Found 0 core definitions
2389 INFO (main) [ ] o.a.s.s.SolrDispatchFilter user.dir=/usr/local/rvm/gems/ruby-2.2.1/gems/sunspot_solr-2.2.3/solr/server
2390 INFO (main) [ ] o.a.s.s.SolrDispatchFilter SolrDispatchFilter.init() done
2435 INFO (main) [ ] o.e.j.s.h.ContextHandler Started o.e.j.w.WebAppContext@1f3ff9d4{/solr,file:/usr/local/rvm/gems/ruby-2.2.1/gems/sunspot_solr-2.2.3/solr/server/solr-webapp/webapp/,AVAILABLE}{/usr/local/rvm/gems/ruby-2.2.1/gems/sunspot_solr-2.2.3/solr/server/solr-webapp/webapp}
2458 INFO (main) [ ] o.e.j.s.ServerConnector Started ServerConnector@649ad901{HTTP/1.1}{0.0.0.0:8080}
2458 INFO (main) [ ] o.e.j.s.Server Started @3074ms
6677 INFO (qtp207710485-20) [ ] o.a.s.s.SolrDispatchFilter [admin] webapp=null path=/admin/cores params={indexInfo=false&_=1453515549920&wt=json} status=0 QTime=58
6796 INFO (qtp207710485-19) [ ] o.a.s.s.SolrDispatchFilter [admin] webapp=null path=/admin/info/system params={_=1453515550069&wt=json} status=0 QTime=23
I can also access localhost:8080/solr.
However, when I run RAILS_ENV=production rake sunspot:solr:start --trace, I get a normal trace:
** Invoke sunspot:solr:start (first_time)
** Invoke environment (first_time)
** Execute environment
** Execute sunspot:solr:start
Removing stale PID file at /home/rails/webapp/solr/pids/production/sunspot-solr-production.pid
Successfully started Solr ...
Yet I can't access localhost:8080 (it gives me an ERR_CONNECTION_REFUSED), and when I try to do anything else involving Solr, I get an Errno::ECONNREFUSED: Connection refused error.
For example, when I run RAILS_ENV=production rake sunspot:solr:reindex after starting Solr, I get:
Errno::ECONNREFUSED: Connection refused - {:data=>"<?xml version=\"1.0\" encoding=\"UTF-8\"?><delete><query>type:Piece</query></delete>", :headers=>{"Content-Type"=>"text/xml"}, :method=>:post, :params=>{:wt=>:ruby}, :query=>"wt=ruby", :path=>"update", :uri=>#<URI::HTTP http://localhost:8080/solr/update?wt=ruby>, :open_timeout=>nil, :read_timeout=>nil, :retry_503=>nil, :retry_after_limit=>nil}
....
Errno::ECONNREFUSED: Connection refused - connect(2) for "localhost" port 8080
My sunspot.yml file looks like this:
production:
  solr:
    hostname: localhost
    port: 8080 # Tomcat defaults to port 8080
    path: /solr
    log_level: WARNING
    solr_home: solr
The Solr server was working fine before. The connection errors started when I tried to seed the production db and received an EOFError: end of file reached. I can post the full trace for that error if needed.
Please help!
I had a similar situation: Sunspot would say that Solr had started successfully, but it never actually did. It turned out I was using Java 1.6; installing JDK 8 fixed it for me.
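To check which Java a fresh shell resolves to before starting Solr (the bundled Solr appears to be 5.x judging by the log above, which needs Java 7 or newer):

```shell
java -version 2>&1                        # version actually on the PATH
command -v java                           # which binary that is
echo "${JAVA_HOME:-JAVA_HOME not set}"    # what Sunspot's launcher may pick up
```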