Can't test with rspec and elasticsearch-extensions on Ubuntu

I've followed this guide to run integration tests in RSpec with the elasticsearch-extensions gem, but the moment I run Elasticsearch::Extensions::Test::Cluster.start(port: 9250, nodes: 1, timeout: 120), the following error is thrown:
Starting 2 Elasticsearch nodes...Exception in thread "main" ElasticsearchException[Failed to load logging configuration]; nested: NoSuchFileException[/usr/share/elasticsearch/config];
Likely root cause: java.nio.file.NoSuchFileException: /usr/share/elasticsearch/config
at sun.nio.fs.UnixException.translateToIOException(UnixException.java:86)
at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:102)
at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:107)
at sun.nio.fs.UnixFileAttributeViews$Basic.readAttributes(UnixFileAttributeViews.java:55)
at sun.nio.fs.UnixFileSystemProvider.readAttributes(UnixFileSystemProvider.java:144)
at sun.nio.fs.LinuxFileSystemProvider.readAttributes(LinuxFileSystemProvider.java:99)
at java.nio.file.Files.readAttributes(Files.java:1737)
at java.nio.file.FileTreeWalker.getAttributes(FileTreeWalker.java:225)
at java.nio.file.FileTreeWalker.visit(FileTreeWalker.java:276)
at java.nio.file.FileTreeWalker.walk(FileTreeWalker.java:322)
at java.nio.file.Files.walkFileTree(Files.java:2662)
at org.elasticsearch.common.logging.log4j.LogConfigurator.resolveConfig(LogConfigurator.java:142)
at org.elasticsearch.common.logging.log4j.LogConfigurator.configure(LogConfigurator.java:103)
at org.elasticsearch.bootstrap.Bootstrap.init(Bootstrap.java:259)
at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:35)
Refer to the log for complete error details.
Exception in thread "main" ElasticsearchException[Failed to load logging configuration]; nested: NoSuchFileException[/usr/share/elasticsearch/config];
Likely root cause: java.nio.file.NoSuchFileException: /usr/share/elasticsearch/config
at sun.nio.fs.UnixException.translateToIOException(UnixException.java:86)
at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:102)
at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:107)
at sun.nio.fs.UnixFileAttributeViews$Basic.readAttributes(UnixFileAttributeViews.java:55)
at sun.nio.fs.UnixFileSystemProvider.readAttributes(UnixFileSystemProvider.java:144)
at sun.nio.fs.LinuxFileSystemProvider.readAttributes(LinuxFileSystemProvider.java:99)
at java.nio.file.Files.readAttributes(Files.java:1737)
at java.nio.file.FileTreeWalker.getAttributes(FileTreeWalker.java:225)
at java.nio.file.FileTreeWalker.visit(FileTreeWalker.java:276)
at java.nio.file.FileTreeWalker.walk(FileTreeWalker.java:322)
at java.nio.file.Files.walkFileTree(Files.java:2662)
at org.elasticsearch.common.logging.log4j.LogConfigurator.resolveConfig(LogConfigurator.java:142)
at org.elasticsearch.common.logging.log4j.LogConfigurator.configure(LogConfigurator.java:103)
at org.elasticsearch.bootstrap.Bootstrap.init(Bootstrap.java:259)
at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:35)
Refer to the log for complete error details.
I guess it's because I have installed ES on my Ubuntu machine, which puts the config files in /etc/elasticsearch/, and it is already running as a service. I tried to symlink the folder, but the folder permissions are:
drwxr-x--- 3 root elasticsearch 4,0K nov 8 17:32 elasticsearch
So I tried adding my user to the elasticsearch group, but had no luck. Any ideas?
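For reference, this is roughly what I tried; note that group changes only take effect in a new login session, which may be why it appeared not to work:
sudo usermod -aG elasticsearch "$USER"
# group membership is only re-read at login, so start a new session
# (or use newgrp) before testing access again:
newgrp elasticsearch
ls -l /etc/elasticsearch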

I ended up just copying the files and changing the owner:
sudo cp -r /etc/elasticsearch/ /usr/share/elasticsearch/config
sudo chown $(whoami):$(whoami) /usr/share/elasticsearch/config/*
Maybe not the best solution, but effective.
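An alternative that avoids touching system directories (untested sketch; the version number is just an example) is to run the test cluster from a standalone tarball, which ships its own config directory:
mkdir -p ~/es-test
tar xzf elasticsearch-1.7.6.tar.gz -C ~/es-test    # example version; use your download
# if your elasticsearch-extensions version honours TEST_CLUSTER_COMMAND,
# point the test cluster at the tarball's launcher before running the specs:
export TEST_CLUSTER_COMMAND="$HOME/es-test/elasticsearch-1.7.6/bin/elasticsearch"
bundle exec rspec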

Related

How to fix Spring Boot error "Failed to retrieve RMIServer stub: javax.naming.CommunicationException error during JRMP connection establishment"

I am having an issue with a Spring Boot Maven project and Cassandra in Docker.
I want to execute mvn verify, start the Cassandra Docker container and the API server in pre-integration-test, run the tests, and then stop the server in post-integration-test, but I am facing the error Could not contact Spring Boot application: Failed to retrieve RMIServer stub: javax.naming.CommunicationException [Root exception is java.rmi.ConnectIOException: error during JRMP connection establishment;
I have created a GitHub repo so you can easily reproduce the error.
Steps to reproduce:
git clone https://github.com/gandra/docker-cassandra-with-initial-seed.git
cd docker-cassandra-with-initial-seed
mvn verify
This will produce an error like this:
[INFO]
[INFO] --- spring-boot-maven-plugin:2.2.2.RELEASE:start (api-server-start) @ docker-cassandra-with-initial-seed ---
[INFO] Attaching agents: []
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 18.501 s
[INFO] Finished at: 2021-01-21T18:39:05+01:00
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal org.springframework.boot:spring-boot-maven-plugin:2.2.2.RELEASE:start (api-server-start) on project docker-cassandra-with-initial-seed: Could not contact Spring Boot application: Failed to retrieve RMIServer stub: javax.naming.CommunicationException [Root exception is java.rmi.ConnectIOException: error during JRMP connection establishment; nested exception is:
[ERROR] java.io.EOFException]
[ERROR] -> [Help 1]
[ERROR]
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR]
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
Any idea how to fix this?
P.S.
I am working on a Mac, and the solution should work on both macOS and Linux.
The code at https://github.com/gandra/docker-cassandra-with-initial-seed is fixed and now working.
Basically, two things changed:
Removed the 9001 port mapping from docker-cassandra.sh.
Added a 20-second sleep in the cassandra_start function in docker-cassandra.sh, to give Cassandra time to boot up properly. Not nice, but at least it works (a polling alternative is sketched below).
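A less brittle variant (untested sketch; assumes the container is named cassandra — adjust to whatever docker-cassandra.sh actually uses) would poll instead of sleeping:
# Poll until Cassandra answers CQL instead of sleeping a fixed 20 s.
cassandra_wait() {
    for _ in $(seq 1 30); do
        if docker exec cassandra cqlsh -e 'DESCRIBE KEYSPACES' >/dev/null 2>&1; then
            return 0
        fi
        sleep 2
    done
    echo "Cassandra did not become ready in time" >&2
    return 1
}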
Now the following should work, with one integration test executing successfully:
git clone https://github.com/gandra/docker-cassandra-with-initial-seed.git
cd docker-cassandra-with-initial-seed
mvn verify
If you are using Windows, there was an issue solved in 2020 that is a little bit tricky to find (here is the complete thread on GitHub if you want to read the full story).
Just to recap, the solution is to fix the jmxPort. For example:
<plugin>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-maven-plugin</artifactId>
    <configuration>
        <jmxPort>9011</jmxPort>
    </configuration>
    [...]
The problem here is that the plugin tries to use a port that is already in use.
I had a similar problem with Spring Boot (not with Cassandra).
I see two ways of solving this:
First way:
Find the program that is using the port (in my case it was "Intel(R) Graphics Command Center" on port 9001). You can use netstat /a /b in a command prompt, or TCPView (a free external program).
Disable the program.
Profit.
Second way:
Configure the jmxPort in your pom on the spring-boot-maven-plugin (the answer from sixro).
BUT, I don't like this way because you basically force the others in your project to use your workaround, while the issue is on your machine.
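For completeness, the same port check on Linux/macOS (sketch; 9001 shown, use whatever jmxPort you suspect):
# Show which process is listening on the suspected JMX port:
lsof -nP -iTCP:9001 -sTCP:LISTEN
# or, without lsof:
netstat -an | grep 9001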

Apache NiFi is not starting up

I am trying to start Apache NiFi 1.2.0 on a Windows 8 machine. It used to start properly, but after I restarted the system, NiFi does not start at all. When I check the status, I keep getting "Apache NiFi not running".
Below are the logs from the nifi.bootstrap.log file:
2017-07-05 15:41:57,105 WARN [NiFi Bootstrap Command Listener]
org.apache.nifi.bootstrap.RunNiFi Failed to set permissions so that only the
owner can read pid file E:\softwares\nifi-1.2.0\bin\..\run\nifi.pid; this
may allows others to have access to the key needed to communicate with NiFi.
Permissions should be changed so that only the owner can read this file
2017-07-05 15:41:57,142 WARN [NiFi Bootstrap Command Listener]
org.apache.nifi.bootstrap.RunNiFi Failed to set permissions so that only the
owner can read status file E:\softwares\nifi-1.2.0\bin\..\run\nifi.status;
this may allows others to have access to the key needed to communicate with
NiFi. Permissions should be changed so that only the owner can read this
file
2017-07-05 15:41:57,168 INFO [NiFi Bootstrap Command Listener]
org.apache.nifi.bootstrap.RunNiFi Apache NiFi now running and listening for
Bootstrap requests on port 50765
2017-07-05 15:43:12,077 ERROR [NiFi logging handler] org.apache.nifi.StdErr
Failed to start web server: Unable to start Flow Controller.
2017-07-05 15:43:12,078 ERROR [NiFi logging handler] org.apache.nifi.StdErr
Shutting down...
2017-07-05 15:43:14,501 INFO [main] org.apache.nifi.bootstrap.RunNiFi NiFi
never started. Will not restart NiFi
Stack trace from nifi.app.log:
2017-07-05 15:43:12,077 WARN [main] org.apache.nifi.web.server.JettyServer Failed to start web server... shutting down.
org.apache.nifi.web.NiFiCoreException: Unable to start Flow Controller.
at org.apache.nifi.web.contextlistener.ApplicationStartupContextListener.contextInitialized(ApplicationStartupContextListener.java:88)
at org.eclipse.jetty.server.handler.ContextHandler.callContextInitialized(ContextHandler.java:876)
at org.eclipse.jetty.servlet.ServletContextHandler.callContextInitialized(ServletContextHandler.java:532)
at org.eclipse.jetty.server.handler.ContextHandler.startContext(ContextHandler.java:839)
at org.eclipse.jetty.servlet.ServletContextHandler.startContext(ServletContextHandler.java:344)
at org.eclipse.jetty.webapp.WebAppContext.startWebapp(WebAppContext.java:1480)
at org.eclipse.jetty.webapp.WebAppContext.startContext(WebAppContext.java:1442)
at org.eclipse.jetty.server.handler.ContextHandler.doStart(ContextHandler.java:799)
at org.eclipse.jetty.servlet.ServletContextHandler.doStart(ServletContextHandler.java:261)
at org.eclipse.jetty.webapp.WebAppContext.doStart(WebAppContext.java:540)
at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:68)
at org.eclipse.jetty.util.component.ContainerLifeCycle.start(ContainerLifeCycle.java:131)
at org.eclipse.jetty.util.component.ContainerLifeCycle.doStart(ContainerLifeCycle.java:113)
at org.eclipse.jetty.server.handler.AbstractHandler.doStart(AbstractHandler.java:113)
at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:68)
at org.eclipse.jetty.util.component.ContainerLifeCycle.start(ContainerLifeCycle.java:131)
at org.eclipse.jetty.util.component.ContainerLifeCycle.doStart(ContainerLifeCycle.java:105)
at org.eclipse.jetty.server.handler.AbstractHandler.doStart(AbstractHandler.java:113)
at org.eclipse.jetty.server.handler.gzip.GzipHandler.doStart(GzipHandler.java:290)
at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:68)
at org.eclipse.jetty.util.component.ContainerLifeCycle.start(ContainerLifeCycle.java:131)
at org.eclipse.jetty.server.Server.start(Server.java:452)
at org.eclipse.jetty.util.component.ContainerLifeCycle.doStart(ContainerLifeCycle.java:105)
at org.eclipse.jetty.server.handler.AbstractHandler.doStart(AbstractHandler.java:113)
at org.eclipse.jetty.server.Server.doStart(Server.java:419)
at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:68)
at org.apache.nifi.web.server.JettyServer.start(JettyServer.java:695)
at org.apache.nifi.NiFi.<init>(NiFi.java:160)
at org.apache.nifi.NiFi.main(NiFi.java:267)
Caused by: java.io.IOException: Expected to read a Sentinel Byte of '1' but got a value of '0' instead
at org.apache.nifi.repository.schema.SchemaRecordReader.readRecord(SchemaRecordReader.java:65)
at org.apache.nifi.controller.repository.SchemaRepositoryRecordSerde.deserializeRecord(SchemaRepositoryRecordSerde.java:115)
at org.apache.nifi.controller.repository.SchemaRepositoryRecordSerde.deserializeEdit(SchemaRepositoryRecordSerde.java:109)
at org.apache.nifi.controller.repository.SchemaRepositoryRecordSerde.deserializeEdit(SchemaRepositoryRecordSerde.java:46)
at org.wali.MinimalLockingWriteAheadLog$Partition.recoverNextTransaction(MinimalLockingWriteAheadLog.java:1096)
at org.wali.MinimalLockingWriteAheadLog.recoverFromEdits(MinimalLockingWriteAheadLog.java:459)
at org.wali.MinimalLockingWriteAheadLog.recoverRecords(MinimalLockingWriteAheadLog.java:301)
at org.apache.nifi.controller.repository.WriteAheadFlowFileRepository.loadFlowFiles(WriteAheadFlowFileRepository.java:381)
at org.apache.nifi.controller.FlowController.initializeFlow(FlowController.java:712)
at org.apache.nifi.controller.StandardFlowService.initializeController(StandardFlowService.java:953)
at org.apache.nifi.controller.StandardFlowService.load(StandardFlowService.java:534)
at org.apache.nifi.web.contextlistener.ApplicationStartupContextListener.contextInitialized(ApplicationStartupContextListener.java:72)
... 28 common frames omitted
Thanks in advance
After Googling the error "Caused by: java.io.IOException: Expected to read a Sentinel Byte of '1' but got a value of '0' instead", I found that it indicates a partial write to the repositories.
Here are a couple of things you can check/try to bring your dataflow back online:
Check that your disks are not full.
Did you launch NiFi with the same user? Did you run it with administrator privileges?
You can back up/move your repositories and try to start NiFi with empty ones; you will still have your dataflows, but any FlowFile that was processing when you shut down will be gone.
Could you please try that?
I think the issue is an incompatible Java version; use Java 8.
If you haven't set JAVA_HOME, set it in your environment variables to a path like "C:/Program Files/jdk1.8".
There is a Jira for NiFi failing to run on Java 9, and the issue is not resolved yet:
https://issues.apache.org/jira/browse/NIFI-4419
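A quick way to check which Java NiFi will pick up (sketch; the JDK path below is only an example):
# NiFi 1.2.0 expects Java 8; check what is on the PATH first:
java -version
# On Windows, point JAVA_HOME at a JDK 8 install (example path), e.g.:
#   setx JAVA_HOME "C:\Program Files\Java\jdk1.8.0_151"
# then open a new terminal before starting NiFi again.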

LINKERD: Unable to build docker image from linkerd

https://github.com/linkerd/linkerd#docker
Following the instructions in the README, I have executed the following commands:
; linkerd/docker ;namerd/docker
I get the following exception:
[info] Done packaging.
[trace] Stack trace suppressed: run last linkerd/bundle:docker for the full output.
[error] (linkerd/bundle:docker) java.io.IOException: Cannot run program "docker" (in directory "/home/shaikk/linkerd/linkerd/target/docker"): error=2, No such file or directory
[error] Total time: 284 s, completed Mar 6, 2017 9:13:49 AM
I think the No such file or directory error message is referring to the docker binary itself. Can you try running which docker to see if it's in your path? If it's not there, you can install it using the instructions here: https://docs.docker.com/engine/installation/#platform-support-matrix
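For example (assuming Ubuntu, matching the paths in the question; the linked page covers other platforms):
# is the docker client on the PATH that sbt sees?
which docker || echo "docker not found on PATH"
# quick install on Ubuntu:
sudo apt-get update && sudo apt-get install -y docker.io
# verify the daemon is reachable:
docker info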

Fresh download and start of Neo4j 1.9.9: fails to start

Steps to recreate:
Downloaded the tar.gz file to Ubuntu 14.04
tar zxvf neo4j.tar.gz
sudo neo4j/bin/neo4j install
sudo service neo4j-service start
Getting the following error:
16:02:44.205 [main] INFO o.n.s.enterprise.EnterpriseNeoServer - Setting startup timeout to: 120000ms based on -1
org.neo4j.kernel.lifecycle.LifecycleException: Component 'org.neo4j.kernel.impl.transaction.XaDataSourceManager#6c04c230' failed to stop. Please see attached cause exception.
at org.neo4j.kernel.lifecycle.LifeSupport$LifecycleInstance.stop(LifeSupport.java:530)
at org.neo4j.kernel.lifecycle.LifeSupport.stop(LifeSupport.java:157)
at org.neo4j.kernel.lifecycle.LifeSupport.start(LifeSupport.java:123)
at org.neo4j.kernel.InternalAbstractGraphDatabase.run(InternalAbstractGraphDatabase.java:284)
at org.neo4j.kernel.EmbeddedGraphDatabase.<init>(EmbeddedGraphDatabase.java:116)
at org.neo4j.graphdb.factory.GraphDatabaseFactory$1.newDatabase(GraphDatabaseFactory.java:96)
at org.neo4j.graphdb.factory.GraphDatabaseBuilder.newGraphDatabase(GraphDatabaseBuilder.java:207)
at org.neo4j.server.enterprise.EnterpriseDatabase$DatabaseMode$1.createDatabase(EnterpriseDatabase.java:42)
at org.neo4j.server.enterprise.EnterpriseDatabase.start(EnterpriseDatabase.java:78)
at org.neo4j.server.AbstractNeoServer.start(AbstractNeoServer.java:170)
at org.neo4j.server.Bootstrapper.start(Bootstrapper.java:103)
at org.neo4j.server.Bootstrapper.main(Bootstrapper.java:57)
For full stack trace see:
https://gist.github.com/henry74/6a14fa76fffabf6d4313
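For anyone debugging this, one way to surface the root cause beyond what the service wrapper shows (sketch; log paths are the 1.9.x defaults):
# stop the service wrapper and run in the foreground instead:
sudo service neo4j-service stop
neo4j/bin/neo4j console
# logs that usually hold the root cause in 1.9.x:
tail -n 100 neo4j/data/log/console.log
tail -n 100 neo4j/data/graph.db/messages.log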

Error with bulk-loading in cassandra

Good Morning,
I'm trying to implement the massive Cassandra data dump example, using bulk loading (http://www.datastax.com/dev/blog/bulk-loading) as a guide.
In the example, dependencies are resolved with a script (http://www.datastax.com/wp-content/uploads/2011/08/DataImport), but the Cassandra libraries are not located in the directories listed there, because the version I'm working with is DSE with Cassandra 2.0. So, trying to cover those dependencies, I ended up with the following script.
#!/bin/sh
# paths to the cassandra source tree, cassandra jar and java
CASSANDRA_HOME="/usr/share/dse/cassandra"
# CASSANDRA_JAR="./apache-cassandra-2.0.10.jar"
JAVA=`which java`
# Java classpath. Must include:
# - directory of DataImportExample
# - directory with cassandra/log4j config files
# - cassandra jar
# - cassandra dependency jars
CLASSPATH=".:/usr/share/dse/dse.jar:./slf4j-1.7.7/slf4j-nop-1.7.7.jar:./slf4j-1.7.7/slf4j-simple-1.7.7.jar:/etc/dse/cassandra"
for jar in $CASSANDRA_HOME/lib/*.jar; do
    CLASSPATH=$CLASSPATH:$jar
done
$JAVA -ea -cp $CLASSPATH -Xmx256M \
    -Dlog4j.configuration=log4j-tools.properties \
    CassandraDataBulk "$@"
CASSANDRA_JAR is commented out; I use "cassandra-all-2.0.8.39.jar", located in /usr/share/dse/cassandra/lib, which is already included by the loop over $CASSANDRA_HOME/lib.
I resolved the slf4j dependencies by downloading version 1.7.7.
Due to the Cassandra version difference, I also had to adapt the SSTableSimpleUnsortedWriter constructor call.
IPartitioner partitioner = new RandomPartitioner();
SSTableSimpleUnsortedWriter sourcesWriter = new SSTableSimpleUnsortedWriter(
        directory,
        partitioner,
        keyspace,
        table,
        AsciiType.instance,
        null,
        64
);
It seems the problem now is that there are still missing dependencies.
Below is the error trace I get.
There is a missing dependency, but since it is "org.apache.commons.configuration.ConfigurationRuntimeException", the real problem could be something else. Could it be a bad "cassandra.yaml" configuration?
Thanks and regards!
[dmdb@vm-dmdb01 ~]$ ./init_env.sh export.csv
[main] ERROR org.apache.cassandra.cql3.QueryProcessor - Unable to initialize MemoryMeter (jamm not specified as javaagent). This means Cassandra will be unable to measure object sizes accurately and may consequently OOM.
[main] INFO org.apache.cassandra.config.YamlConfigurationLoader - Loading settings from file:/etc/dse/cassandra/cassandra.yaml
[main] INFO org.apache.cassandra.config.DatabaseDescriptor - Data files directories: [/data01, /data02]
[main] INFO org.apache.cassandra.config.DatabaseDescriptor - Commit log directory: /datatmp/commitlog
[main] INFO org.apache.cassandra.config.DatabaseDescriptor - DiskAccessMode 'auto' determined to be mmap, indexAccessMode is mmap
[main] INFO org.apache.cassandra.config.DatabaseDescriptor - disk_failure_policy is stop
[main] INFO org.apache.cassandra.config.DatabaseDescriptor - commit_failure_policy is stop
[main] INFO org.apache.cassandra.config.DatabaseDescriptor - Global memtable threshold is enabled at 61MB
[main] INFO com.datastax.bdp.snitch.Workload - Setting my workload to Cassandra
Exception in thread "main" java.lang.NoClassDefFoundError: org/apache/commons/configuration/ConfigurationRuntimeException
at com.datastax.bdp.config.ConfigUtil.defaultValue(ConfigUtil.java:18)
at com.datastax.bdp.config.DseConfig.<clinit>(DseConfig.java:51)
at com.datastax.bdp.snitch.DseDelegateSnitch.<init>(DseDelegateSnitch.java:42)
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
at java.lang.Class.newInstance(Class.java:374)
at org.apache.cassandra.utils.FBUtilities.construct(FBUtilities.java:488)
at org.apache.cassandra.config.DatabaseDescriptor.createEndpointSnitch(DatabaseDescriptor.java:508)
at org.apache.cassandra.config.DatabaseDescriptor.applyConfig(DatabaseDescriptor.java:341)
at org.apache.cassandra.config.DatabaseDescriptor.<clinit>(DatabaseDescriptor.java:111)
at org.apache.cassandra.io.sstable.AbstractSSTableSimpleWriter.<init>(AbstractSSTableSimpleWriter.java:50)
at org.apache.cassandra.io.sstable.SSTableSimpleUnsortedWriter.<init>(SSTableSimpleUnsortedWriter.java:96)
at org.apache.cassandra.io.sstable.SSTableSimpleUnsortedWriter.<init>(SSTableSimpleUnsortedWriter.java:80)
at org.apache.cassandra.io.sstable.SSTableSimpleUnsortedWriter.<init>(SSTableSimpleUnsortedWriter.java:91)
at CassandraDataBulk.main(CassandraDataBulk.java:35)
Caused by: java.lang.ClassNotFoundException: org.apache.commons.configuration.ConfigurationRuntimeException
at java.net.URLClassLoader$1.run(URLClassLoader.java:366)
at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
at java.security.AccessController.doPrivileged(Native Method)
at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
at java.lang.ClassLoader.loadClass(ClassLoader.java:425)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308)
at java.lang.ClassLoader.loadClass(ClassLoader.java:358)
... 17 more
You are missing a "javaagent" parameter in your java call. Add the following:
-javaagent:$CASSANDRA_HOME/lib/jamm-0.2.5.jar
Your final call should look like:
$JAVA -ea -cp $CLASSPATH -Xmx256M \
    -Dlog4j.configuration=log4j-tools.properties \
    -javaagent:$CASSANDRA_HOME/lib/jamm-0.2.5.jar \
    CassandraDataBulk "$@"
NOTE: Adjust the path to jamm.jar as necessary
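To confirm which jamm version your DSE install actually ships (path matches CASSANDRA_HOME from the script above):
# list the jamm agent jar(s) bundled with DSE's Cassandra:
ls /usr/share/dse/cassandra/lib/jamm-*.jar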
Reference
As for the runtime configuration error, download the Apache Commons Configuration library (the missing class org.apache.commons.configuration.ConfigurationRuntimeException lives there; it also needs Commons Lang) and include it in your classpath.
Download here
If you receive NEW exceptions after implementing the fix, download google-common.jar and guava-16.0.1.jar and include them in your classpath as well. These are all of the JARs that my own bulk loader has required so far.
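A sketch of those classpath additions (version numbers are examples; match whatever your DSE release expects):
# org.apache.commons.configuration.* lives in commons-configuration,
# which in turn needs commons-lang; guava covers the follow-up errors above.
CLASSPATH=$CLASSPATH:./commons-configuration-1.10.jar
CLASSPATH=$CLASSPATH:./commons-lang-2.6.jar
CLASSPATH=$CLASSPATH:./guava-16.0.1.jar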
