Using Eclipse on Windows, a Vert.x Verticle with a misconfigured cluster.xml shows the following error in the Eclipse console:
11:46:18.536 [hz._hzInstance_1_dev.generic-operation.thread-0] ERROR com.hazelcast.cluster - [192.168.25.8]:5701 [dev] [3.5.2] Node could not join cluster. A Configuration mismatch was detected: Incompatible joiners! expected: multicast, found: tcp-ip Node is going to shutdown now!
11:46:22.529 [vert.x-worker-thread-0] ERROR com.hazelcast.cluster.impl.TcpIpJoiner - [192.168.25.8]:5701 [dev] [3.5.2] com.hazelcast.core.HazelcastInstanceNotActiveException: Hazelcast instance is not active!
This is fine; I know to reconfigure the cluster for multicast. The problem is that when I deploy the same code and configuration to Linux and run it as a fat jar, the log shows neither the hz thread nor the Vert.x worker thread entries. Instead it shows only the verticle's own logs:
2015-11-05 12:03:09,329 Starting clustered Vertx
2015-11-05 12:03:13,549 ERROR: VerticleService failed to start: java.lang.NullPointerException
So when I run on Linux, the log that would tell me there's a misconfiguration isn't showing. There's something I'm missing in the Vert.x / Maven log config, but I don't know what. My Maven properties are as follows:
<properties>
    <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
    <exec.mainClass>main.java.eiger.isct.service.Verticle</exec.mainClass>
    <log4j.configurationFile>log4j2.xml</log4j.configurationFile>
    <hazelcast.logging.type>log4j2</hazelcast.logging.type>
</properties>
and I start the fat jar using:
java -Dlog4j.configuration=log4j2.xml -jar Verticle-0.5-SNAPSHOT-fat.jar
How can I get the Hazelcast and Vert.x worker threads to log on Linux?
I've tried adding the vertx-default-jul-logging.properties file below to the Maven resources directory, but no luck.
com.hazelcast.level=ALL
java.util.logging.ConsoleHandler.level=ALL
java.util.logging.FileHandler.level=ALL
Thanks for your comment.
Vert.x has started logging after adding
-Djava.util.logging.config.file=../logging.properties
to the java start command, with a logging.properties like the following (and this is a nice config for lower-level stuff):
handlers=java.util.logging.ConsoleHandler,java.util.logging.FileHandler
java.util.logging.SimpleFormatter.format=%1$tY-%1$tm-%1$td %1$tH:%1$tM:%1$tS:%1$tL %4$s %2$s %5$s%6$s%n
java.util.logging.ConsoleHandler.formatter=java.util.logging.SimpleFormatter
java.util.logging.ConsoleHandler.level=ALL
java.util.logging.FileHandler.level=ALL
java.util.logging.FileHandler.formatter=java.util.logging.SimpleFormatter
java.util.logging.FileHandler.pattern=../logs/vertx.log
.level=ALL
io.vertx.level=ALL
com.hazelcast.level=ALL
io.netty.util.internal.PlatformDependent.level=ALL
and Vert.x is now logging to ../logs/vertx.log on Linux.
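For completeness, since the rest of this stack is configured for Log4j 2: another option (a sketch, assuming Vert.x 3.x with the Log4j 2 jars and an SLF4J binding packaged in the fat jar) is to route Vert.x's own logging through SLF4J instead of JUL:
java -Dvertx.logger-delegate-factory-class-name=io.vertx.core.logging.SLF4JLogDelegateFactory \
     -Dlog4j.configurationFile=log4j2.xml \
     -jar Verticle-0.5-SNAPSHOT-fat.jar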
I have installed Apache Helix 1.0.0. I am able to set up a cluster and add resources, but when I try to start run-helix-controller.sh it gives the error below.
Here is the command: ./run-helix-controller.sh --zkSvr localhost:2181 --cluster jbpm-cluster
ERROR
[2020-05-20 06:22:29,773] [INFO ] [main] [org.apache.helix.controller.HelixControllerMain:208] - Cluster manager started, zkServer: lpwaidqu02:2181, clusterName:jbpm-cluster, controllerName:null, mode:STANDALONE
Exception in thread "main" java.lang.NoSuchFieldError: Rebalancer
at org.apache.helix.InstanceType.<clinit>(InstanceType.java:39)
at org.apache.helix.controller.HelixControllerMain.startHelixController(HelixControllerMain.java:156)
at org.apache.helix.controller.HelixControllerMain.main(HelixControllerMain.java:212)
Have you tried the steps in this guide?
http://helix.apache.org/0.9.7-docs/Quickstart.html
It seems there are a few setup steps (like "add cluster") that need to run before the ./run-helix-controller.sh command; see the sketch below.
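A minimal sketch of that Quickstart sequence against your cluster, assuming ZooKeeper on localhost:2181; the node and resource names here are hypothetical placeholders:
# create the cluster, register a participant, define and balance a resource
./helix-admin.sh --zkSvr localhost:2181 --addCluster jbpm-cluster
./helix-admin.sh --zkSvr localhost:2181 --addNode jbpm-cluster localhost_12913
./helix-admin.sh --zkSvr localhost:2181 --addResource jbpm-cluster myResource 6 MasterSlave
./helix-admin.sh --zkSvr localhost:2181 --rebalance jbpm-cluster myResource 3
# only then start the controller
./run-helix-controller.sh --zkSvr localhost:2181 --cluster jbpm-cluster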
I've recently installed Jenkins on a cheap VM on Azure. The specs are very low, since I use this server for testing the setup: 1 vCPU and 1 GB RAM. There will usually be only 1 build at a time, with a maximum of 3 on very rare occasions.
During builds, my server quite frequently crashes completely and stays down for roughly 10-15 minutes before it can be used again.
I checked the processes on the server, and the result is a long list of seemingly identical Jenkins java daemons. The full command line of each is:
/etc/alternatives/java -Dcom.sun.akuma.Daemon=daemonized -Djava.awt.headless=true -DJENKINS_HOME=/var/lib/jenkins -jar /usr/lib/jenkins/jenkins.war --logfile=/var/log/jenkins/jenkins.log --webroot=/var/cache/jenkins/war --daemon --httpPort=8080 --debug=5 --handlerCountMax=100 --handlerCountMaxIdle=20
It is the same for every single one of those daemons; not a single parameter differs.
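To check whether those are genuinely separate daemons or just the threads of a single JVM (viewers like htop show threads as separate rows by default), something like this can be run; the pgrep pattern is an assumption based on the command line above:
# One line per process: there should normally be just one Jenkins JVM.
ps -ef | grep '[j]enkins.war'
# One line per thread of that JVM; a thread view renders these
# as many identical-looking daemons.
ps -Lp $(pgrep -f jenkins.war) | wc -l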
Is this normal behavior, and is this the reason why my server is crashing? Or are my specs simply too low for Jenkins to run on?
Thanks in advance!
EDIT:
My jenkins.log file looks pretty normal, except for one NullPointerException that keeps coming back:
2020-01-08 12:43:17.702+0000 [id=148] WARNING h.ExpressionFactory2$JexlExpression#evaluate: Caught exception evaluating: h.filterDescriptors(it,attrs.descriptors) in /configure. Reason: java.lang.NullPointerException: Descriptor list is null for context 'class hudson.model.Hudson' in thread 'Handling GET /configure from 85.154.65.124 : qtp2085857771-148 Jenkins/configure.jelly GlobalLibraries/config.jelly LibraryConfiguration/config.jelly SCMRetriever/DescriptorImpl/config.jelly MultiSCM/DescriptorImpl/config.jelly'
java.lang.NullPointerException: Descriptor list is null for context 'class hudson.model.Hudson' in thread 'Handling GET /configure from 85.154.65.124 : qtp2085857771-148 Jenkins/configure.jelly GlobalLibraries/config.jelly LibraryConfiguration/config.jelly SCMRetriever/DescriptorImpl/config.jelly MultiSCM/DescriptorImpl/config.jelly'
at hudson.model.DescriptorVisibilityFilter.apply(DescriptorVisibilityFilter.java:73)
...
I have a Java application whose JMX attributes I want to monitor using the Telegraf tool.
Telegraf provides a jolokia plugin for monitoring JMX attributes. I have added the following dependencies to my app's pom.xml file, following the Maven section of the Jolokia documentation:
<dependency>
    <groupId>org.jolokia</groupId>
    <artifactId>jolokia-core</artifactId>
    <version>1.3.7</version>
</dependency>
<dependency>
    <groupId>org.jolokia</groupId>
    <artifactId>jolokia-client-java</artifactId>
    <version>1.3.7</version>
</dependency>
This is my /etc/telegraf/telegraf.conf file:
[[inputs.jolokia]]
  context = "/jolokia/"

  [[inputs.jolokia.servers]]
    name = "wr-core"
    host = "192.168.100.175"
    port = "1998"

  [[inputs.jolokia.metrics]]
    name = "send_success"
    mbean = "wr-core:type=monitor,name=execution"
    attribute = "MessageSendSuccessCount"
The application is up on the given IP/port (I can connect to it with jconsole). It has a monitoring section whose object name (as shown in jconsole) is wr-core:type=monitor,name=execution, with the attribute MessageSendSuccessCount. But when I start the telegraf service, the following error occurs:
Jan 14 14:30:32 ZiZi telegraf[17258]: 2018-01-14T11:00:32Z E! Error in plugin [inputs.jolokia]: error performing request: Error decoding JSON response: invalid character '\x00' looking for beginning of value:
Note that 1998 is my app's JMX port. I also tried 8778, the Jolokia agent port, and got:
Jan 14 14:40:03 ZiZi telegraf[9150]: 2018-01-14T11:10:03Z E! Error in plugin [inputs.jolokia]: error performing request: Post http://192.168.100.175:8778/jolokia/: dial tcp 192.168.100.175:8778: getsockopt: connection refused
EDIT 1:
I have checked my CLASSPATH and both jolokia-client and jolokia-core were listed: ../lib/jolokia-client-java-1.3.7.jar:../lib/jolokia-core-1.3.7.jar.
EDIT 2:
I have put the following lines into my app's launch script:
JOLOKIA_OPTS=-javaagent:$LIB_PATH/jolokia-core-java-1.3.7.jar=port=8778,host=0.0.0.0
JAVA_OPTS="-mx4096M $JAVA_OPTS $JACOCO_OPTS $JOLOKIA_OPTS"
But when I run it, I get this error (even though ../lib/jolokia-core-java-1.3.7.jar is listed in the CLASSPATH):
Error opening zip file or JAR manifest missing : ../lib/jolokia-core-java-1.3.7.jar
Error occurred during initialization of VM
agent library failed to init: instrument
Found the solution.
I skipped the Maven route and tried the javaagent approach instead; I had misunderstood the usage of -javaagent previously. It has to point at the Jolokia JVM agent jar (this helped):
JOLOKIA_OPTS=-javaagent:/root/jolokia-jvm-1.3.7-agent.jar=port=8778,host=0.0.0.0
JAVA_OPTS="-mx4096M $JAVA_OPTS $JACOCO_OPTS $JOLOKIA_OPTS"
Now, my app starts with this log (successfully):
I> No access restrictor found, access to any MBean is allowed
Jolokia: Agent started with URL http://192.168.100.175:8778/jolokia/
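As a sanity check, the agent can also be queried directly over HTTP (a sketch; the IP/port come from the log line above and the MBean/attribute names are the ones from my telegraf.conf):
# Agent version and status:
curl http://192.168.100.175:8778/jolokia/version
# Read the attribute that Telegraf collects:
curl 'http://192.168.100.175:8778/jolokia/read/wr-core:type=monitor,name=execution/MessageSendSuccessCount'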
On the other side, there are no more Jolokia errors in the Telegraf console.
All observations imply that the Jolokia JVM agent has started and is working successfully.
I have also found the Jolokia JMX documentation for using it as a dependency inside the project, but since I'm not a Java expert (I'm testing the app), I prefer to stay with the javaagent approach for now and leave the dependency route for future study. It may help others, though.
EDIT 1:
I have found and deployed the Jolokia JVM agent using its Spring support.
By configuring it in the Spring XML file, I can now have the Jolokia JVM agent start listening at my app's startup.
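A minimal sketch of that Spring wiring, per the Jolokia Spring documentation (assumes the jolokia-spring artifact is on the classpath; host/port mirror the agent settings above):
<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xmlns:jolokia="http://www.jolokia.org/jolokia-spring/schema/config"
       xsi:schemaLocation="http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans.xsd
                           http://www.jolokia.org/jolokia-spring/schema/config http://www.jolokia.org/jolokia-spring/schema/config/jolokia-config.xsd">

    <!-- Starts the Jolokia HTTP agent together with the application context -->
    <jolokia:agent lookupConfig="false" systemPropertiesMode="never">
        <jolokia:config autoStart="true" host="0.0.0.0" port="8778"/>
    </jolokia:agent>
</beans>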
I have installed Alfresco and Solr in different Tomcat instances using the docs URL below. Alfresco Share was running fine, but when I run the Solr instance I see the error below.
http://docs.alfresco.com/5.1/tasks/solr4-install-config.html
I generated secure keys for Solr communication following:
http://docs.alfresco.com/5.1/tasks/generate-keys-solr4.html
2016-07-18 13:25:30,037 ERROR [solr.tracker.AbstractTracker] [SolrTrackerScheduler_Worker-14] Tracking failed
org.alfresco.error.AlfrescoRuntimeException: 06180034 api/solr/aclchangesets return status:403
    at org.alfresco.solr.client.SOLRAPIClient.getAclChangeSets(SOLRAPIClient.java:162)
    at org.alfresco.solr.tracker.AclTracker.checkRepoAndIndexConsistency(AclTracker.java:335)
    at org.alfresco.solr.tracker.AclTracker.trackRepository(AclTracker.java:313)
    at org.alfresco.solr.tracker.AclTracker.doTrack(AclTracker.java:104)
    at org.alfresco.solr.tracker.AbstractTracker.track(AbstractTracker.java:185)
    at org.alfresco.solr.tracker.TrackerJob.execute(TrackerJob.java:47)
    at org.quartz.core.JobRunShell.run(JobRunShell.java:216)
    at org.quartz.simpl.SimpleThreadPool$WorkerThread.run(SimpleThreadPool.java:563)
2016-07-18 13:25:30,029 ERROR [solr.tracker.AbstractTracker] [SolrTrackerScheduler_Worker-6] Tracking failed
org.alfresco.error.AlfrescoRuntimeException: 06180033 api/solr/aclchangesets return status:403
    at org.alfresco.solr.client.SOLRAPIClient.getAclChangeSets(SOLRAPIClient.java:162)
    at org.alfresco.solr.tracker.AclTracker.checkRepoAndIndexConsistency(AclTracker.java:335)
    at org.alfresco.solr.tracker.AclTracker.trackRepository(AclTracker.java:313)
    at org.alfresco.solr.tracker.AclTracker.doTrack(AclTracker.java:104)
    at org.alfresco.solr.tracker.AbstractTracker.track(AbstractTracker.java:185)
    at org.alfresco.solr.tracker.TrackerJob.execute(TrackerJob.java:47)
    at org.quartz.core.JobRunShell.run(JobRunShell.java:216)
    at org.quartz.simpl.SimpleThreadPool$WorkerThread.run(SimpleThreadPool.java:563)
2016-07-18 13:25:30,056 ERROR [solr.tracker.AbstractTracker] [SolrTrackerScheduler_Worker-11] Tracking failed
org.alfresco.error.AlfrescoRuntimeException: 06180036 GetModelsDiff return status is 403
    at org.alfresco.solr.client.SOLRAPIClient.getModelsDiff(SOLRAPIClient.java:1157)
    at org.alfresco.solr.tracker.ModelTracker.trackModelsImpl(ModelTracker.java:249)
    at org.alfresco.solr.tracker.ModelTracker.trackModels(ModelTracker.java:207)
    at org.alfresco.solr.tracker.ModelTracker.doTrack(ModelTracker.java:16
    at org.alfresco.solr.tracker.AbstractTracker.track(AbstractTracker.java:185)
    at org.alfresco.solr.tracker.TrackerJob.execute(TrackerJob.java:47)
    at org.quartz.core.JobRunShell.run(JobRunShell.java:216)
    at org.quartz.simpl.SimpleThreadPool$WorkerThread.run(SimpleThreadPool.java:563)
I had a similar problem, which was solved by changing
clientAuth="false"
to
clientAuth="want"
in the Tomcat 7 server.xml; a sketch follows below.
(Credit goes to https://issues.alfresco.com/jira/browse/ACE-4551?focusedCommentId=424920&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-424920)
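For context, the attribute lives on the repository's HTTPS connector in tomcat/conf/server.xml. A minimal sketch (ports, keystore paths, types, and passwords are assumptions from a typical Alfresco install, not verified values; the relevant change is clientAuth):
<Connector port="8443" protocol="org.apache.coyote.http11.Http11Protocol"
           SSLEnabled="true" maxThreads="150" scheme="https" secure="true"
           keystoreFile="/opt/alfresco/alf_data/keystore/ssl.keystore"
           keystoreType="JCEKS" keystorePass="changeit"
           truststoreFile="/opt/alfresco/alf_data/keystore/ssl.truststore"
           truststoreType="JCEKS" truststorePass="changeit"
           clientAuth="want" sslProtocol="TLS"/>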
I have written my own Hadoop program, which I can run in pseudo-distributed mode on my laptop. However, when I put the program on a cluster that runs the Hadoop example jars fine, it launches a local job by default even though I specify HDFS file paths. The output is below; any suggestions?
./hadoop -jar MyRandomForest_oob_distance.jar hdfs://montana-01:8020/user/randomforest/input/genotype1.txt hdfs://montana-01:8020/user/randomforest/input/phenotype1.txt hdfs://montana-01:8020/user/randomforest/output1_distance/ hdfs://montana-01:8020/user/randomforest/input/genotype101.txt hdfs://montana-01:8020/user/randomforest/input/phenotype101.txt 33 500 1
12/03/16 16:21:25 INFO jvm.JvmMetrics: Initializing JVM Metrics with processName=JobTracker, sessionId=
12/03/16 16:21:25 WARN mapred.JobClient: Use GenericOptionsParser for parsing the arguments. Applications should implement Tool for the same.
12/03/16 16:21:25 INFO mapred.JobClient: Running job: job_local_0001
12/03/16 16:21:25 INFO mapred.MapTask: io.sort.mb = 100
12/03/16 16:21:25 INFO mapred.MapTask: data buffer = 79691776/99614720
12/03/16 16:21:25 INFO mapred.MapTask: record buffer = 262144/327680
12/03/16 16:21:25 WARN mapred.LocalJobRunner: job_local_0001
java.io.FileNotFoundException: File /user/randomforest/input/genotype1.txt does not exist.
at org.apache.hadoop.fs.RawLocalFileSystem.getFileStatus(RawLocalFileSystem.java:361)
at org.apache.hadoop.fs.FilterFileSystem.getFileStatus(FilterFileSystem.java:245)
at org.apache.hadoop.fs.ChecksumFileSystem$ChecksumFSInputChecker.<init>(ChecksumFileSystem.java:125)
at org.apache.hadoop.fs.ChecksumFileSystem.open(ChecksumFileSystem.java:283)
at org.apache.hadoop.fs.FileSystem.open(FileSystem.java:356)
at Data.Data.loadData(Data.java:103)
at MapReduce.DearMapper.loadData(DearMapper.java:261)
at MapReduce.DearMapper.setup(DearMapper.java:332)
at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:142)
at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:621)
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:305)
at org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:177)
12/03/16 16:21:26 INFO mapred.JobClient: map 0% reduce 0%
12/03/16 16:21:26 INFO mapred.JobClient: Job complete: job_local_0001
12/03/16 16:21:26 INFO mapred.JobClient: Counters: 0
Total Running time is: 1 secs
LocalJobRunner has been chosen because your configuration most probably has the mapred.job.tracker property set to local, or not set at all (in which case the default is local). To check, go to "wherever you extracted/installed hadoop"/etc/hadoop/ and see if the file mapred-site.xml exists (for me it did not; a file called mapred-site.xml.template was there). In that file (or create it if it doesn't exist), make sure it has the following property:
<configuration>
    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
    </property>
</configuration>
See the source for org.apache.hadoop.mapred.JobClient.init(JobConf).
What is the value of this configuration property in the Hadoop configuration on the machine you are submitting from? Also confirm that the hadoop executable you are running references this configuration (and that you don't have two or more installations configured differently): type which hadoop and trace any symlinks you come across, as in the sketch below.
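For example (a sketch; the config paths are assumptions and differ between Hadoop 1.x and 2.x layouts):
# Which hadoop executable is first on the PATH, and where does it really live?
which hadoop
readlink -f "$(which hadoop)"
# Find where the jobtracker / framework settings are defined for that install:
grep -rl 'mapred.job.tracker\|mapreduce.framework.name' \
    "$HADOOP_HOME/conf" "$HADOOP_HOME/etc/hadoop" 2>/dev/null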
Alternatively, if you know the JobTracker host and port number, you can override this when you submit your job using the -jt option:
hadoop jar MyRandomForest_oob_distance.jar -jt hostname:port hdfs://montana-01:8020/user/randomforest/input/genotype1.txt hdfs://montana-01:8020/user/randomforest/input/phenotype1.txt hdfs://montana-01:8020/user/randomforest/output1_distance/ hdfs://montana-01:8020/user/randomforest/input/genotype101.txt hdfs://montana-01:8020/user/randomforest/input/phenotype101.txt 33 500 1
If you're using Hadoop 2 and your job is running locally instead of on the cluster, ensure that you have set up mapred-site.xml to contain the mapreduce.framework.name property with a value of yarn. You also need to set up an aux-service in yarn-site.xml, as sketched below.
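A minimal sketch of that aux-service wiring in yarn-site.xml (these are the stock MRv2 shuffle-service property names; on newer releases the handler class is usually the default):
<configuration>
    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>
    <property>
        <name>yarn.nodemanager.aux-services.mapreduce_shuffle.class</name>
        <value>org.apache.hadoop.mapred.ShuffleHandler</value>
    </property>
</configuration>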
Check out the Cloudera Hadoop 2 operator migration blog for more information.
I had the same problem: every MapReduce v2 (MRv2) / YARN task ran only with the mapred.LocalJobRunner:
INFO mapred.LocalJobRunner: Starting task: attempt_local284299729_0001_m_000000_0
The ResourceManager and NodeManagers were accessible, and mapreduce.framework.name was set to yarn.
Setting HADOOP_MAPRED_HOME before executing the job fixed the problem for me:
export HADOOP_MAPRED_HOME=/usr/lib/hadoop-mapreduce
cheers
dan