I have written my own Hadoop program and I can run it in pseudo-distributed mode on my laptop. However, when I submit it on a cluster that can run the Hadoop example jars, it launches a local job by default even though I pass HDFS file paths. The output is below; any suggestions?
./hadoop -jar MyRandomForest_oob_distance.jar hdfs://montana-01:8020/user/randomforest/input/genotype1.txt hdfs://montana-01:8020/user/randomforest/input/phenotype1.txt hdfs://montana-01:8020/user/randomforest/output1_distance/ hdfs://montana-01:8020/user/randomforest/input/genotype101.txt hdfs://montana-01:8020/user/randomforest/input/phenotype101.txt 33 500 1
12/03/16 16:21:25 INFO jvm.JvmMetrics: Initializing JVM Metrics with processName=JobTracker, sessionId=
12/03/16 16:21:25 WARN mapred.JobClient: Use GenericOptionsParser for parsing the arguments. Applications should implement Tool for the same.
12/03/16 16:21:25 INFO mapred.JobClient: Running job: job_local_0001
12/03/16 16:21:25 INFO mapred.MapTask: io.sort.mb = 100
12/03/16 16:21:25 INFO mapred.MapTask: data buffer = 79691776/99614720
12/03/16 16:21:25 INFO mapred.MapTask: record buffer = 262144/327680
12/03/16 16:21:25 WARN mapred.LocalJobRunner: job_local_0001
java.io.FileNotFoundException: File /user/randomforest/input/genotype1.txt does not exist.
at org.apache.hadoop.fs.RawLocalFileSystem.getFileStatus(RawLocalFileSystem.java:361)
at org.apache.hadoop.fs.FilterFileSystem.getFileStatus(FilterFileSystem.java:245)
at org.apache.hadoop.fs.ChecksumFileSystem$ChecksumFSInputChecker.<init>(ChecksumFileSystem.java:125)
at org.apache.hadoop.fs.ChecksumFileSystem.open(ChecksumFileSystem.java:283)
at org.apache.hadoop.fs.FileSystem.open(FileSystem.java:356)
at Data.Data.loadData(Data.java:103)
at MapReduce.DearMapper.loadData(DearMapper.java:261)
at MapReduce.DearMapper.setup(DearMapper.java:332)
at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:142)
at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:621)
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:305)
at org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:177)
12/03/16 16:21:26 INFO mapred.JobClient: map 0% reduce 0%
12/03/16 16:21:26 INFO mapred.JobClient: Job complete: job_local_0001
12/03/16 16:21:26 INFO mapred.JobClient: Counters: 0
Total Running time is: 1 secs
LocalJobRunner has been chosen because your configuration most probably has the mapred.job.tracker property set to local, or not set at all (in which case the default is local). To check, go to "wherever you extracted/installed hadoop"/etc/hadoop/ and see whether the file mapred-site.xml exists (for me it did not; only a file called mapred-site.xml.template was there). In that file (create it if it doesn't exist) make sure it has the following property:
<configuration>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
</configuration>
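On an older, pre-YARN cluster the equivalent knob is the mapred.job.tracker property mentioned above, which points the JobClient at the JobTracker instead of the LocalJobRunner; a minimal sketch, with the hostname and port as placeholders for your cluster:
<configuration>
  <property>
    <name>mapred.job.tracker</name>
    <value>jobtracker-host:8021</value>
  </property>
</configuration>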
See the source for org.apache.hadoop.mapred.JobClient.init(JobConf)
What is the value of this configuration property in the hadoop configuration on the machine you are submitting this from? Also confirm that the hadoop executable you are running references this configuration (and that you don't have 2+ installations configured differently) - type which hadoop and trace any symlinks you come across.
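On a Hadoop 2 client you can also query the effective value directly from the machine you submit from (assuming the hdfs command is on your PATH):
hdfs getconf -confKey mapreduce.framework.name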
Alternatively, if you know the JobTracker host and port number, you can override this when you submit your job using the -jt option:
hadoop jar MyRandomForest_oob_distance.jar -jt hostname:port hdfs://montana-01:8020/user/randomforest/input/genotype1.txt hdfs://montana-01:8020/user/randomforest/input/phenotype1.txt hdfs://montana-01:8020/user/randomforest/output1_distance/ hdfs://montana-01:8020/user/randomforest/input/genotype101.txt hdfs://montana-01:8020/user/randomforest/input/phenotype101.txt 33 500 1
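Note that -jt (like the other generic options) is only honoured if your driver passes its arguments through GenericOptionsParser, which is exactly what the warning in the log output hints at. Below is a minimal sketch of a driver that implements Tool so that ToolRunner strips the generic options first; the class name, job name and job wiring are placeholders, not the actual code from the question (Job.getInstance is the Hadoop 2 API):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.conf.Configured;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.util.Tool;
import org.apache.hadoop.util.ToolRunner;

public class RandomForestDriver extends Configured implements Tool {

    @Override
    public int run(String[] args) throws Exception {
        // Generic options such as -jt, -fs and -D have already been applied
        // to getConf() by ToolRunner; args holds only the remaining arguments.
        Job job = Job.getInstance(getConf(), "random-forest");
        job.setJarByClass(RandomForestDriver.class);
        // ... set mapper/reducer classes and input/output paths from args here ...
        return job.waitForCompletion(true) ? 0 : 1;
    }

    public static void main(String[] args) throws Exception {
        System.exit(ToolRunner.run(new Configuration(), new RandomForestDriver(), args));
    }
}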
If you're using Hadoop 2 and your job is running locally instead of on the cluster, ensure that you have set up mapred-site.xml to contain the mapreduce.framework.name property with a value of yarn. You also need to set up the mapreduce_shuffle aux-service in yarn-site.xml.
Check out the Cloudera Hadoop 2 operator migration blog for more information.
I had the same problem: every MapReduce v2 (MRv2) / YARN job only ran with mapred.LocalJobRunner:
INFO mapred.LocalJobRunner: Starting task: attempt_local284299729_0001_m_000000_0
The ResourceManager and NodeManagers were accessible, and mapreduce.framework.name was set to yarn.
Setting the HADOOP_MAPRED_HOME before executing the job fixed the problem for me.
export HADOOP_MAPRED_HOME=/usr/lib/hadoop-mapreduce
I posted about this over on the Isaac forums, but I'm listing it here for visibility as well. I am trying to get the Isaac RealSense examples working on a Jetson Nano with my 435i (firmware downgraded to 5.11.15 per the Isaac documentation), but so far I've been unable to do so. I've got a Nano flashed with JetPack 4.3 and have installed all dependencies on both the desktop and the Nano. realsense-viewer works fine, so I know the camera is functioning properly and is being detected by the Nano. However, when I run ./apps/samples/realsense_camera/realsense_camera it throws an error:
ERROR engine/alice/components/Codelet.cpp#229: Component 'camera/realsense' of type 'isaac::RealsenseCamera' reported FAILURE:
No device connected, please connect a RealSense device
ERROR engine/alice/backend/event_manager.cpp#42: Stopping node 'camera' because it reached status 'FAILURE'
I've attached the log of this output as well. I get the same error running locally on my desktop, but that's running through WSL so I was willing to write that off. Any suggestions would be greatly appreciated!
2020-06-15 17:18:20.620 INFO engine/alice/tools/websight.cpp#166: Loading websight...
2020-06-15 17:18:20.621 WARN engine/alice/backend/application_json_loader.cpp#174: This application does not have an explicit scheduler configuration. One will be autogenerated to the best of the system's abilities if possible.
2020-06-15 17:18:20.622 INFO engine/alice/backend/redis_backend.cpp#40: Successfully connected to Redis server.
2020-06-15 17:18:20.623 WARN engine/alice/backend/backend.cpp#201: This application does not have an execution group configuration. One will be autogenerated to the best of the systems abilities if possible.
2020-06-15 17:18:20.623 WARN engine/gems/scheduler/scheduler.cpp#337: No default execution groups specified. Attempting to create scheduler configuration for 4 remaining cores. This may be non optimal for the system and application.
2020-06-15 17:18:20.623 INFO engine/gems/scheduler/scheduler.cpp#290: Scheduler execution groups are:
2020-06-15 17:18:20.623 INFO engine/gems/scheduler/scheduler.cpp#299: __BlockerGroup__: Cores = [3], Workers = No
2020-06-15 17:18:20.623 INFO engine/gems/scheduler/scheduler.cpp#299: __WorkerGroup__: Cores = [0, 1, 2], Workers = Yes
2020-06-15 17:18:20.660 INFO engine/alice/backend/modules.cpp#226: Loaded module 'packages/realsense/librealsense_module.so': Now has 45 components total
2020-06-15 17:18:20.679 INFO engine/alice/backend/modules.cpp#226: Loaded module 'packages/rgbd_processing/librgbd_processing_module.so': Now has 51 components total
2020-06-15 17:18:20.696 INFO engine/alice/backend/modules.cpp#226: Loaded module 'packages/sight/libsight_module.so': Now has 54 components total
2020-06-15 17:18:20.720 INFO engine/alice/backend/modules.cpp#226: Loaded module 'packages/viewers/libviewers_module.so': Now has 83 components total
2020-06-15 17:18:20.720 DEBUG engine/alice/application.cpp#348: Loaded 83 components: isaac::RealsenseCamera, isaac::alice::BufferAllocatorReport, isaac::alice::ChannelMonitor, isaac::alice::CheckJetsonPerformanceModel, isaac::alice::CheckOperatingSystem, isaac::alice::Config, isaac::alice::ConfigBridge, isaac::alice::ConfigLoader, isaac::alice::Failsafe, isaac::alice::FailsafeHeartbeat, isaac::alice::InteractiveMarkersBridge, isaac::alice::JsonToProto, isaac::alice::LifecycleReport, isaac::alice::MessageLedger, isaac::alice::MessagePassingReport, isaac::alice::NodeStatistics, isaac::alice::Pose, isaac::alice::Pose2Comparer, isaac::alice::PoseFromFile, isaac::alice::PoseInitializer, isaac::alice::PoseMessageInjector, isaac::alice::PoseToFile, isaac::alice::PoseToMessage, isaac::alice::PoseTree, isaac::alice::PoseTreeJsonBridge, isaac::alice::PoseTreeRelink, isaac::alice::ProtoToJson, isaac::alice::PyCodelet, isaac::alice::Random, isaac::alice::Recorder, isaac::alice::RecorderBridge, isaac::alice::Replay, isaac::alice::ReplayBridge, isaac::alice::Scheduling, isaac::alice::Sight, isaac::alice::SightChannelStatus, isaac::alice::Subgraph, isaac::alice::Subprocess, isaac::alice::TcpPublisher, isaac::alice::TcpSubscriber, isaac::alice::Throttle, isaac::alice::TimeOffset, isaac::alice::TimeSynchronizer, isaac::alice::UdpPublisher, isaac::alice::UdpSubscriber, isaac::map::Map, isaac::map::ObstacleAtlas, isaac::map::OccupancyGridMapLayer, isaac::map::PolygonMapLayer, isaac::map::WaypointMapLayer, isaac::navigation::DistanceMap, isaac::navigation::NavigationMap, isaac::navigation::RangeScanModelClassic, isaac::navigation::RangeScanModelFlatloc, isaac::rgbd_processing::DepthEdges, isaac::rgbd_processing::DepthImageFlattening, isaac::rgbd_processing::DepthImageToPointCloud, isaac::rgbd_processing::DepthNormals, isaac::rgbd_processing::DepthPoints, isaac::rgbd_processing::FreespaceFromDepth, isaac::sight::AliceSight, isaac::sight::SightWidget, isaac::sight::WebsightServer, isaac::viewers::BinaryMapViewer, isaac::viewers::ColorCameraViewer, isaac::viewers::DepthCameraViewer, isaac::viewers::Detections3Viewer, isaac::viewers::DetectionsViewer, isaac::viewers::FiducialsViewer, isaac::viewers::FlatscanViewer, isaac::viewers::GoalViewer, isaac::viewers::ImageKeypointViewer, isaac::viewers::LidarViewer, isaac::viewers::MosaicViewer, isaac::viewers::ObjectViewer, isaac::viewers::OccupancyMapViewer, isaac::viewers::PointCloudViewer, isaac::viewers::PoseTrailViewer, isaac::viewers::SegmentationCameraViewer, isaac::viewers::SegmentationViewer, isaac::viewers::SkeletonViewer, isaac::viewers::TensorViewer, isaac::viewers::TrajectoryListViewer,
2020-06-15 17:18:20.723 WARN engine/alice/application.cpp#164: The function Application::findComponentByName is deprecated. Please use `getNodeComponentOrNull` instead. Note that the new method requires a node name instead of a component name. (argument: 'websight/isaac.sight.AliceSight')
2020-06-15 17:18:20.723 INFO engine/alice/application.cpp#255: Starting application 'realsense_camera' (instance UUID: 'e24992d0-af66-11ea-8bcf-c957460c567e') ...
2020-06-15 17:18:20.723 DEBUG engine/gems/scheduler/execution_groups.cpp#476: Launching 0 pre-start job(s)
2020-06-15 17:18:20.723 DEBUG engine/gems/scheduler/execution_groups.cpp#485: Replaying 0 pre-start event(s)
2020-06-15 17:18:20.723 DEBUG engine/gems/scheduler/execution_groups.cpp#476: Launching 0 pre-start job(s)
2020-06-15 17:18:20.723 DEBUG engine/gems/scheduler/execution_groups.cpp#485: Replaying 0 pre-start event(s)
2020-06-15 17:18:20.723 INFO engine/alice/backend/asio_backend.cpp#33: Starting ASIO service
2020-06-15 17:18:20.727 INFO packages/sight/WebsightServer.cpp#216: Sight webserver is loaded
2020-06-15 17:18:20.727 INFO packages/sight/WebsightServer.cpp#217: Please open Chrome Browser and navigate to http://<ip address>:3000
2020-06-15 17:18:20.727 WARN engine/alice/backend/codelet_canister.cpp#225: Codelet 'websight/isaac.sight.AliceSight' was not added to scheduler because no tick method is specified.
2020-06-15 17:18:20.728 WARN engine/alice/components/Codelet.cpp#53: Function deprecated. Set tick_period to the desired tick paramater
2020-06-15 17:18:20.728 WARN engine/alice/backend/codelet_canister.cpp#225: Codelet '_check_operating_system/isaac.alice.CheckOperatingSystem' was not added to scheduler because no tick method is specified.
2020-06-15 17:18:20.728 WARN engine/alice/components/Codelet.cpp#53: Function deprecated. Set tick_period to the desired tick paramater
2020-06-15 17:18:20.730 WARN engine/alice/components/Codelet.cpp#53: Function deprecated. Set tick_period to the desired tick paramater
2020-06-15 17:18:20.741 ERROR engine/alice/components/Codelet.cpp#229: Component 'camera/realsense' of type 'isaac::RealsenseCamera' reported FAILURE:
No device connected, please connect a RealSense device
2020-06-15 17:18:20.741 ERROR engine/alice/backend/event_manager.cpp#42: Stopping node 'camera' because it reached status 'FAILURE'
2020-06-15 17:18:20.743 WARN engine/alice/backend/codelet_canister.cpp#225: Codelet 'camera/realsense' was not added to scheduler because no tick method is specified.
2020-06-15 17:18:21.278 INFO packages/sight/WebsightServer.cpp#113: Server connected / 1
2020-06-15 17:18:30.723 INFO engine/alice/backend/allocator_backend.cpp#57: Optimized memory CPU allocator.
2020-06-15 17:18:30.724 INFO engine/alice/backend/allocator_backend.cpp#66: Optimized memory CUDA allocator.
I am running Kafka locally using the confluentinc/cp-kafka Docker image, and I am setting the following logging environment variables on the container:
KAFKA_LOG4J_ROOT_LOGLEVEL: ERROR
KAFKA_LOG4J_LOGGERS: >-
  org.apache.zookeeper=ERROR,
  org.apache.kafka=ERROR,
  kafka=ERROR,
  kafka.cluster=ERROR,
  kafka.controller=ERROR,
  kafka.coordinator=ERROR,
  kafka.log=ERROR,
  kafka.server=ERROR,
  kafka.zookeeper=ERROR,
  state.change.logger=ERROR
and I see in the Kafka logs that Kafka is starting with the following configuration:
===> ENV Variables ...
ALLOW_UNSIGNED=false
COMPONENT=kafka
CONFLUENT_DEB_VERSION=1
CONFLUENT_PLATFORM_LABEL=
CONFLUENT_VERSION=5.4.1
...
KAFKA_LOG4J_LOGGERS=org.apache.zookeeper=ERROR, org.apache.kafka=ERROR, kafka=ERROR, kafka.cluster=ERROR, kafka.controller=ERROR, kafka.coordinator=ERROR, kafka.log=ERROR, kafka.server=ERROR, kafka.zookeeper=ERROR, state.change.logger=ERROR
KAFKA_LOG4J_ROOT_LOGLEVEL=ERROR
...
Still, further down in the logs I see entries at INFO and TRACE level. For example:
[2020-03-26 16:22:12,838] INFO [Controller id=1001] Ready to serve as the new controller with epoch 1 (kafka.controller.KafkaController)
[2020-03-26 16:22:12,848] INFO [Controller id=1001] Partitions undergoing preferred replica election: (kafka.controller.KafkaController)
[2020-03-26 16:22:12,849] INFO [Controller id=1001] Partitions that completed preferred replica election: (kafka.controller.KafkaController)
[2020-03-26 16:22:12,855] INFO [Controller id=1001] Skipping preferred replica election for partitions due to topic deletion: (kafka.controller.KafkaController)
How can I really deactivate the logs below a certain level? In the example above, I really want only ERROR logs.
The approach above is the way described in the Confluent documentation.
And the Apache Kafka source code lists all sorts of loggers that I could not influence using the KAFKA_LOG4J_LOGGERS Docker environment variable.
I troubleshot the Dockerfiles and inspected the Kafka container. The cause of this behaviour was YAML multiline string folding.
Because of the folding, the environment variable provided as a YAML multiline value arrives at runtime as:
KAFKA_LOG4J_LOGGERS=org.apache.zookeeper=ERROR, org.apache.kafka=ERROR, kafka=ERROR, kafka.cluster=ERROR, kafka.controller=ERROR, kafka.coordinator=ERROR, kafka.log=ERROR, kafka.server=ERROR, kafka.zookeeper=ERROR, state.change.logger=ERROR
instead of (no spaces in between):
KAFKA_LOG4J_LOGGERS=org.apache.zookeeper=ERROR,org.apache.kafka=ERROR,kafka=ERROR,kafka.cluster=ERROR,kafka.controller=ERROR,kafka.coordinator=ERROR,kafka.log=ERROR,kafka.server=ERROR,kafka.zookeeper=ERROR,state.change.logger=ERROR
And this was visible inside the container in the generated /etc/kafka/log4j.properties file:
log4j.rootLogger=ERROR, stdout
log4j.appender.stdout=org.apache.log4j.ConsoleAppender
log4j.appender.stdout.layout=org.apache.log4j.PatternLayout
log4j.appender.stdout.layout.ConversionPattern=[%d] %p %m (%c)%n
log4j.logger.kafka.authorizer.logger=WARN
log4j.logger.kafka.cluster=ERROR
log4j.logger.kafka.producer.async.DefaultEventHandler=DEBUG
log4j.logger.kafka.zookeeper=ERROR
log4j.logger.org.apache.kafka=ERROR
log4j.logger.kafka.coordinator=ERROR
log4j.logger.org.apache.zookeeper=ERROR
log4j.logger.kafka.log.LogCleaner=INFO
log4j.logger.kafka.controller=ERROR
log4j.logger.kafka=INFO
log4j.logger.kafka.log=ERROR
log4j.logger.state.change.logger=ERROR
log4j.logger.kafka=ERROR
log4j.logger.kafka.server=ERROR
log4j.logger.kafka.controller=TRACE
log4j.logger.kafka.network.RequestChannel$=WARN
log4j.logger.kafka.request.logger=WARN
log4j.logger.state.change.logger=TRACE
The simplest fix is to keep the whole value on a single line. If you really need to split the long value across multiple lines in YAML, you have to use a syntax that does not insert spaces when folding, as sketched below.
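For reference, one such syntax is a double-quoted scalar with escaped line breaks, which joins the lines without inserting spaces; a sketch with the logger list shortened for brevity:
KAFKA_LOG4J_LOGGERS: "org.apache.zookeeper=ERROR,\
  org.apache.kafka=ERROR,\
  kafka=ERROR,\
  state.change.logger=ERROR"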
More hints from the code:
here is where the log4j.properties file is generated when a confluent container is run.
these are the default log levels that Kafka will start with.
these should be all the loggers supported by Kafka
I have a Java application whose JMX attributes I want to monitor using the Telegraf tool.
Telegraf provides a jolokia plugin to monitor JMX attributes. I have added the following dependencies to my app's pom.xml file, per the Maven section of the Jolokia documentation:
<dependency>
  <groupId>org.jolokia</groupId>
  <artifactId>jolokia-core</artifactId>
  <version>1.3.7</version>
</dependency>
<dependency>
  <groupId>org.jolokia</groupId>
  <artifactId>jolokia-client-java</artifactId>
  <version>1.3.7</version>
</dependency>
This is my /etc/telegraf/telegraf.conf file:
[[inputs.jolokia]]
  context = "/jolokia/"

  [[inputs.jolokia.servers]]
    name = "wr-core"
    host = "192.168.100.175"
    port = "1998"

  [[inputs.jolokia.metrics]]
    name = "send_success"
    mbean = "wr-core:type=monitor,name=execution"
    attribute = "MessageSendSuccessCount"
The application is up on the given IP/port (I can connect to it with jconsole). The application exposes a monitoring MBean whose object name (as shown in jconsole) is wr-core:type=monitor,name=execution and which has the attribute MessageSendSuccessCount. But when I start the telegraf service, the following error occurs:
Jan 14 14:30:32 ZiZi telegraf[17258]: 2018-01-14T11:00:32Z E! Error in plugin [inputs.jolokia]: error performing request: Error decoding JSON response: invalid character '\x00' looking for beginning of value:
Note that 1998 is my app's JMX port. I also tried 8778, the Jolokia agent port, and got:
Jan 14 14:40:03 ZiZi telegraf[9150]: 2018-01-14T11:10:03Z E! Error in plugin [inputs.jolokia]: error performing request: Post http://192.168.100.175:8778/jolokia/: dial tcp 192.168.100.175:8778: getsockopt: connection refused
EDIT 1:
I have checked my CLASSPATH and both jolokia-client and jolokia-core were listed: ../lib/jolokia-client-java-1.3.7.jar:../lib/jolokia-core-1.3.7.jar.
EDIT 2:
I have put the following lines into my app's launch script:
JOLOKIA_OPTS=-javaagent:$LIB_PATH/jolokia-core-java-1.3.7.jar=port=8778,host=0.0.0.0
JAVA_OPTS="-mx4096M $JAVA_OPTS $JACOCO_OPTS $JOLOKIA_OPTS"
But when I run the file, I get this error (even though ../lib/jolokia-core-java-1.3.7.jar has been listed in the CLASSPATH):
Error opening zip file or JAR manifest missing : ../lib/jolokia-core-java-1.3.7.jar
Error occurred during initialization of VM
agent library failed to init: instrument
Found the solution.
I skipped the Maven approach and tried the javaagent approach instead, but I had misunderstood the usage of javaagent previously; the -javaagent option must point at the Jolokia JVM agent jar, not at jolokia-core (this helped):
JOLOKIA_OPTS=-javaagent:/root/jolokia-jvm-1.3.7-agent.jar=port=8778,host=0.0.0.0
JAVA_OPTS="-mx4096M $JAVA_OPTS $JACOCO_OPTS $JOLOKIA_OPTS"
Now my app starts successfully with this log:
I> No access restrictor found, access to any MBean is allowed
Jolokia: Agent started with URL http://192.168.100.175:8778/jolokia/
On the Telegraf side, there is no jolokia error in the console anymore.
Everything observed implies that the Jolokia JVM agent has started and works successfully.
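A quick way to double-check the agent from another machine is to hit Jolokia's version endpoint over HTTP; the host and port below are taken from the startup log above:
curl http://192.168.100.175:8778/jolokia/version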
I also found the Jolokia JMX documentation for using it as a dependency inside the project, but since I'm not a Java expert (I'm testing the app), I prefer to stay with the javaagent approach for now and leave that for future study. It may help others, though.
EDIT 1:
I have found and deployed the Jolokia JVM agent using its Spring support.
By configuring it in the Spring XML file, the Jolokia JVM agent now starts listening at my app's startup.
Using Eclipse on Windows, a vertx Verticle with a misconfigured cluster.xml shows the following error in the Eclipse console:
11:46:18.536 [hz._hzInstance_1_dev.generic-operation.thread-0] ERROR com.hazelcast.cluster - [192.168.25.8]:5701 [dev] [3.5.2] Node could not join cluster. A Configuration mismatch was detected: Incompatible joiners! expected: multicast, found: tcp-ip Node is going to shutdown now!
11:46:22.529 [vert.x-worker-thread-0] ERROR com.hazelcast.cluster.impl.TcpIpJoiner - [192.168.25.8]:5701 [dev] [3.5.2] com.hazelcast.core.HazelcastInstanceNotActiveException: Hazelcast instance is not active!
This is fine; I know to reconfigure the cluster for multicast. The problem is that when I deploy the same code and configuration to Linux and run it as a fat jar, the same log shows neither the hz thread nor the vertx worker thread entries. Instead it shows the verticle logs as:
2015-11-05 12:03:09,329 Starting clustered Vertx
2015-11-05 12:03:13,549 ERROR: VerticleService failed to start: java.lang.NullPointerException
So when I run on Linux, the log that would tell me there's a misconfiguration isn't showing. There's something I'm missing in the vertx / maven log config, but I don't know what. Maven properties are as follows:
<properties>
  <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
  <exec.mainClass>main.java.eiger.isct.service.Verticle</exec.mainClass>
  <log4j.configurationFile>log4j2.xml</log4j.configurationFile>
  <hazelcast.logging.type>log4j2</hazelcast.logging.type>
</properties>
and I start the fat jar using:
java -Dlog4j.configuration=log4j2.xml -jar Verticle-0.5-SNAPSHOT-fat.jar
How can I get the hz thread and vertx thread to log on Linux?
I've tried adding the vertx-default-jul-logging.properties file below to the Maven resources dir, but no luck.
com.hazelcast.level=ALL
java.util.logging.ConsoleHandler.level=ALL
java.util.logging.FileHandler.level=ALL
Thanks for your comment.
Vertx started logging after I added
-Djava.util.logging.config.file=../logging.properties
to the java start command, together with a default logging.properties like the following (and this is a nice config for lower-level stuff):
handlers=java.util.logging.ConsoleHandler,java.util.logging.FileHandler
java.util.logging.SimpleFormatter.format=%1$tY-%1$tm-%1$td %1$tH:%1$tM:%1$tS:%1$tL %4$s %2$s %5$s%6$s%n
java.util.logging.ConsoleHandler.formatter=java.util.logging.SimpleFormatter
java.util.logging.ConsoleHandler.level=ALL
java.util.logging.FileHandler.level=ALL
java.util.logging.FileHandler.formatter=java.util.logging.SimpleFormatter
java.util.logging.FileHandler.pattern=../logs/vertx.log
.level=ALL
io.vertx.level=ALL
com.hazelcast.level=ALL
io.netty.util.internal.PlatformDependent.level=ALL
and vertx is now logging to ../logs/vertx.log on Linux.
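Putting it together, the start command from the question with the JUL config file added looks roughly like this (paths are relative to where the jar is launched, as in the snippets above):
java -Djava.util.logging.config.file=../logging.properties -Dlog4j.configuration=log4j2.xml -jar Verticle-0.5-SNAPSHOT-fat.jar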
I'm struggling to get the "Musicbrainz" data set running on my local machine (MacBook Air, Mountain Lion).
What did I do? (Basically what the video said.)
Downloaded data set directly from http://www.neo4j.org/develop/example_data
Downloaded fresh Neo4j-Community-2.1.5
Extracted the data set, renamed it to graph.db and copied it to /neo4j-community-2.1.5/data/graph.db
Ran the server with ./bin/neo4j console, but Neo4j does not start; not even an exception is thrown.
Gerardos-MacBook-Air:neo4j-community-2.1.5 Gery$ ./bin/neo4j console
WARNING: Max 256 open files allowed, minimum of 40 000 recommended. See the Neo4j manual.
Starting Neo4j Server console-mode...
Using additional JVM arguments: -server -XX:+DisableExplicitGC -Dorg.neo4j.server.properties=conf/neo4j-server.properties -Djava.util.logging.config.file=conf/logging.properties -Dlog4j.configuration=file:conf/log4j.properties -XX:+UseConcMarkSweepGC -XX:+CMSClassUnloadingEnabled
2014-10-09 15:40:31.700+0000 INFO [API] Setting startup timeout to: 120000ms based on -1
Detected incorrectly shut down database, performing recovery..
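(As an aside, the warning at the top refers to the shell's open-file limit; a minimal sketch of raising it in the same shell before starting the server, with the 40 000 figure taken from the warning itself; on OS X the system-wide limit may also need to be raised via launchctl:
ulimit -n 40000
./bin/neo4j console
Silencing that warning is not necessarily the cause of the hang, though.)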
I tried several settings in neo4j.properties, but nothing changed; see the properties below ...
use_memory_mapped_buffers=true
neostore.nodestore.db.mapped_memory=300M
neostore.relationshipstore.db.mapped_memory=3G
neostore.propertystore.db.mapped_memory=500M
neostore.propertystore.db.strings.mapped_memory=500M
neostore.propertystore.db.arrays.mapped_memory=0M
neostore.propertystore.db.index.keys.mapped_memory=15M
neostore.propertystore.db.index.mapped_memory=15M
allow_store_upgrade=true
I was hoping that this would work out of the box. What am I missing ...?