I need help with this problem. Can someone explain why this is happening and how to prevent or avoid it?
Exception in thread "Thread-747" java.lang.OutOfMemoryError: PermGen space
Exception in thread "Thread-748" java.lang.OutOfMemoryError: PermGen space
Exception in thread "Thread-759" java.lang.OutOfMemoryError: PermGen space
Exception in thread "Thread-760" java.lang.OutOfMemoryError: PermGen space
Exception in thread "Thread-764" java.lang.OutOfMemoryError: PermGen space
Exception in thread "Thread-765" java.lang.OutOfMemoryError: PermGen space
Exception in thread "Thread-766" java.lang.OutOfMemoryError: PermGen space
Exception in thread "Thread-767" java.lang.OutOfMemoryError: PermGen space
Exception in thread "Thread-773" java.lang.OutOfMemoryError: PermGen space
Exception in thread "Thread-774" java.lang.OutOfMemoryError: PermGen space
Exception in thread "Thread-780" java.lang.OutOfMemoryError: PermGen space
Exception in thread "Thread-781" java.lang.OutOfMemoryError: PermGen space
Exception in thread "Thread-788" java.lang.OutOfMemoryError: PermGen space
Exception in thread "Thread-789" java.lang.OutOfMemoryError: PermGen space
2011-06-20 14:42:10,668 [http-8080-6] ERROR [/CM].[grails] - Servlet.service() for servlet grails threw exception
java.lang.OutOfMemoryError: PermGen space
2011-06-20 14:42:10,668 [http-8080-6] ERROR [/CM].[default] - Servlet.service() for servlet default threw exception
java.lang.OutOfMemoryError: PermGen space
: java.lang.OutOfMemoryError: PermGen space
at org.apache.tools.ant.dispatch.DispatchUtils.execute(DispatchUtils.java:116)
at _GrailsPackage_groovy$_run_closure8.doCall(_GrailsPackage_groovy:275)
at _GrailsPackage_groovy$_run_closure8.call(_GrailsPackage_groovy)
at _GrailsRun_groovy$_run_closure8.doCall(_GrailsRun_groovy:245)
at RunApp$_run_closure1.doCall(RunApp.groovy:35)
at gant.Gant$_dispatch_closure5.doCall(Gant.groovy:381)
at gant.Gant$_dispatch_closure7.doCall(Gant.groovy:415)
at gant.Gant$_dispatch_closure7.doCall(Gant.groovy)
at gant.Gant.withBuildListeners(Gant.groovy:427)
at gant.Gant.this$2$withBuildListeners(Gant.groovy)
at gant.Gant$this$2$withBuildListeners.callCurrent(Unknown Source)
at gant.Gant.dispatch(Gant.groovy:415)
at gant.Gant.this$2$dispatch(Gant.groovy)
at gant.Gant.invokeMethod(Gant.groovy)
at gant.Gant.executeTargets(Gant.groovy:590)
at gant.Gant.executeTargets(Gant.groovy:589)
Caused by: java.lang.OutOfMemoryError: PermGen space
--- Nested Exception ---
java.lang.OutOfMemoryError: PermGen space
Error automatically restarting container: java.lang.OutOfMemoryError: PermGen space
Error executing script RunApp: PermGen space
java.lang.OutOfMemoryError: PermGen space
Error executing script RunApp: PermGen space
Application context shutting down...
Application context shutdown.
The PermGen is a region of your JVM's memory that is used to store loaded classes.
As your application executes, it uses more and more of this memory, especially if you are in a debugging environment, or if you make extensive use of closures.
The way to fix this is to add more of it!
This is done by passing one or two parameters to the JVM when launching your application.
The parameters are:
-XX:MaxPermSize=256m
-XX:PermSize=128m
(adjust the values to your specific needs)
PermSize sets the initial size of the PermGen, and MaxPermSize sets the maximum size to which it can grow before the JVM throws exceptions like the ones in your post.
By default, the maximum is set to 64M, which is not much if you have a 'real' application.
PAY ATTENTION: your total memory usage will be heap size + PermGen size.
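For example, if you launch the app with the grails command from a shell, these flags can typically be passed through an environment variable; a minimal sketch, assuming a Unix shell and a Grails version that honors JAVA_OPTS (the values are illustrative):
# adjust the sizes to your machine and application
export JAVA_OPTS="-XX:PermSize=128m -XX:MaxPermSize=256m"
grails run-app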
If you are using servlet version 3.0, then even increasing your memory won't be of any help, since it's a problem with the Groovy compiler. The new version 1.8.2/1.9 (?), which will be released soon, will resolve this issue. In the meantime you can change the servlet version back to "2.5" (in BuildConfig.groovy), which works around it.
The drawback of changing the servlet version to 2.5 is that the application can then no longer be deployed to the Glassfish application server, so the ugly workaround is: change to 2.5 and use "run-app" locally. When you want to deploy to Glassfish, change the servlet version to "3.0" in BuildConfig.groovy, run "war", and then deploy the war to Glassfish.
Change it back to "2.5" to run on your local dev machine again.
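For reference, the setting being toggled lives in grails-app/conf/BuildConfig.groovy; a minimal sketch of the switch described above (the comment is mine, not part of Grails):
// grails-app/conf/BuildConfig.groovy
grails.servlet.version = "2.5" // local dev with run-app; set to "3.0" before running "war" for Glassfish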
Check the FAQ
Q: OMG I get OutOfMemoryErrors or PermGen Space errors when running Grails in development mode. What do I do?
Since Grails 0.6, Grails automatically re-compiles Java sources and domain classes using pre-compilation and then a server restart. This can lead to PermGen space running out if the server is run for a long time and many changes are made. You can disable this feature, if it is not important to you, with:
grails -Ddisable.auto.recompile=true run-app
There is also a problem with Grails 0.6 on Windows where you get OutOfMemoryErrors during a period of activity in development mode due to the re-compilation. This may be solved in SVN head, but if you see this problem the above option can also help.
The easiest option is just to restart your application server when it happens.
In the STS IDE, set the VM arguments as follows:
-XX:MaxPermSize=512m -XX:PermSize=128m
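If it helps to locate where these go: in STS they are typically entered as VM arguments of the launch configuration (the exact menu path below is an assumption and may vary by STS version):
Run > Run Configurations... > your application > Arguments > VM arguments:
-XX:MaxPermSize=512m -XX:PermSize=128m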
I hope it helps.
Related
WebLogic is not coming up. It is giving the following stack trace. Can anyone help in solving this?
<Jun 20, 2018 1:04:27,029 PM UTC> <Critical> <WebLogicServer> <BEA-000386> <Server subsystem failed. Reason: A MultiException has 4 exceptions. They are:
1. java.lang.ExceptionInInitializerError
2. java.lang.IllegalStateException: Unable to perform operation: post construct on weblogic.rjvm.RJVMService
3. java.lang.IllegalArgumentException: While attempting to resolve the dependencies of weblogic.protocol.ProtocolRegistrationService errors were found
4. java.lang.IllegalStateException: Unable to perform operation: resolve on weblogic.protocol.ProtocolRegistrationService
A MultiException has 4 exceptions. They are:
1. java.lang.ExceptionInInitializerError
2. java.lang.IllegalStateException: Unable to perform operation: post construct on weblogic.rjvm.RJVMService
3. java.lang.IllegalArgumentException: While attempting to resolve the dependencies of weblogic.protocol.ProtocolRegistrationService errors were found
4. java.lang.IllegalStateException: Unable to perform operation: resolve on weblogic.protocol.ProtocolRegistrationService
at org.jvnet.hk2.internal.Collector.throwIfErrors(Collector.java:89)
at org.jvnet.hk2.internal.ClazzCreator.resolveAllDependencies(ClazzCreator.java:250)
at org.jvnet.hk2.internal.ClazzCreator.create(ClazzCreator.java:358)
at org.jvnet.hk2.internal.SystemDescriptor.create(SystemDescriptor.java:487)
at org.glassfish.hk2.runlevel.internal.AsyncRunLevelContext.findOrCreate(AsyncRunLevelContext.java:305)
Truncated. see log file for complete stacktrace
Caused By: java.lang.ExceptionInInitializerError
at weblogic.utils.net.AddressUtils.getIPForLocalHost(AddressUtils.java:163)
at weblogic.rjvm.JVMID.setLocalID(JVMID.java:278)
at weblogic.rjvm.RJVMService.setJVMID(RJVMService.java:72)
at weblogic.rjvm.RJVMService.start(RJVMService.java:54)
at weblogic.server.AbstractServerService.postConstruct(AbstractServerService.java:76)
Truncated. see log file for complete stacktrace
Caused By: java.lang.NullPointerException
at weblogic.utils.net.AddressUtils$AddressMaker.getAllAddresses(AddressUtils.java:62)
at weblogic.utils.net.AddressUtils$AddressMaker.<clinit>(AddressUtils.java:45)
at weblogic.utils.net.AddressUtils.getIPForLocalHost(AddressUtils.java:163)
at weblogic.rjvm.JVMID.setLocalID(JVMID.java:278)
at weblogic.rjvm.RJVMService.setJVMID(RJVMService.java:72)
Truncated. see log file for complete stacktrace
>
The WebLogic Server encountered a critical failure
Reason: Assertion violated
Stopping Derby server...
Derby server stopped.
Actually, there was an interface resolution problem inside the Docker container which was causing this.
For resolution, make sure of the following points (a quick check is sketched below):
1) /etc/hosts should have an entry corresponding to localhost
2) the docker0 interface should be in the up state
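A quick way to verify both points from a shell on the Docker host; a minimal sketch (docker0 is the default bridge name on a stock install):
cat /etc/hosts                # should contain a line like: 127.0.0.1 localhost
ip link show docker0          # look for "state UP" in the output
sudo ip link set docker0 up   # bring the interface up if it is down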
While building a project in Jenkins I am getting an OutOfMemoryError.
The log looks like this:
Exception in thread "main" java.lang.OutOfMemoryError: GC overhead limit exceeded
at org.sonar.sslr.internal.vm.Machine.createNode(Machine.java:256)
at org.sonar.sslr.internal.vm.Instruction$RetInstruction.execute(Instruction.java:305)
at org.sonar.sslr.internal.vm.Machine.execute(Machine.java:162)
at org.sonar.sslr.internal.vm.Machine.execute(Machine.java:106)
at org.sonar.sslr.internal.vm.Machine.parse(Machine.java:84)
at org.sonar.sslr.parser.ParseRunner.parse(ParseRunner.java:45)
at com.sonar.sslr.api.typed.ActionParser.parse(ActionParser.java:102)
at com.sonar.sslr.api.typed.ActionParser.parse(ActionParser.java:91)
at org.sonar.php.PHPAnalyzer.nextFile(PHPAnalyzer.java:71)
at org.sonar.plugins.php.PHPSensor.analyseFile(PHPSensor.java:142)
at org.sonar.plugins.php.PHPSensor.analyseFiles(PHPSensor.java:124)
at org.sonar.plugins.php.PHPSensor.analyse(PHPSensor.java:115)
at org.sonar.batch.phases.SensorsExecutor.executeSensor(SensorsExecutor.java:58)
at org.sonar.batch.phases.SensorsExecutor.execute(SensorsExecutor.java:50)
at org.sonar.batch.phases.AbstractPhaseExecutor.execute(AbstractPhaseExecutor.java:83)
at org.sonar.batch.scan.ModuleScanContainer.doAfterStart(ModuleScanContainer.java:192)
at org.sonar.core.platform.ComponentContainer.startComponents(ComponentContainer.java:142)
at org.sonar.core.platform.ComponentContainer.execute(ComponentContainer.java:127)
at org.sonar.batch.scan.ProjectScanContainer.scan(ProjectScanContainer.java:241)
at org.sonar.batch.scan.ProjectScanContainer.scanRecursively(ProjectScanContainer.java:236)
at org.sonar.batch.scan.ProjectScanContainer.doAfterStart(ProjectScanContainer.java:226)
at org.sonar.core.platform.ComponentContainer.startComponents(ComponentContainer.java:142)
at org.sonar.core.platform.ComponentContainer.execute(ComponentContainer.java:127)
at org.sonar.batch.task.ScanTask.execute(ScanTask.java:47)
at org.sonar.batch.task.TaskContainer.doAfterStart(TaskContainer.java:86)
at org.sonar.core.platform.ComponentContainer.startComponents(ComponentContainer.java:142)
at org.sonar.core.platform.ComponentContainer.execute(ComponentContainer.java:127)
at org.sonar.batch.bootstrap.GlobalContainer.executeTask(GlobalContainer.java:106)
at org.sonar.batch.bootstrapper.Batch.executeTask(Batch.java:119)
at org.sonarsource.scanner.api.internal.batch.BatchIsolatedLauncher.execute(BatchIsolatedLauncher.java:62)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
The build also keeps running for what I guess is a long time. I am using the SonarQube Runner.
What should I do?
In the build step's "JVM Options" input, you need to specify more memory (than the default) for the process. Here's what that might look like:
-Xmx2g -Xms512m -XX:MaxPermSize=512m
Note that you should adjust these values based on your system resources.
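If you would rather set this outside the Jenkins field, the scanner usually also reads an environment variable; a sketch assuming the older sonar-runner (the variable name differs across scanner generations, e.g. SONAR_SCANNER_OPTS for newer ones):
export SONAR_RUNNER_OPTS="-Xmx2g -Xms512m -XX:MaxPermSize=512m"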
I have a Cassandra Docker container which did not start after a hard restart of my PC.
The command-line output is very long. These are some snippets:
java.lang.OutOfMemoryError: Java heap space
ERROR 20:08:26 JVM state determined to be unstable. Exiting forcefully due to:
java.lang.OutOfMemoryError: Java heap space
Caused by: java.io.IOException: Corrupt (negative) value length encountered
ERROR 20:08:29 JVM state determined to be unstable. Exiting forcefully due to:
java.lang.OutOfMemoryError: Java heap space
at org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28)
ERROR 20:08:29 JVM state determined to be unstable. Exiting forcefully due to:
java.lang.OutOfMemoryError: Java heap space
ERROR 20:08:29 JVM state determined to be unstable. Exiting forcefully due to:
java.lang.OutOfMemoryError: Java heap space
ERROR 20:08:29 JVM state determined to be unstable. Exiting forcefully due to:
java.lang.OutOfMemoryError: Java heap space
ERROR 20:08:29 JVM state determined to be unstable. Exiting forcefully due to:
java.lang.OutOfMemoryError: Java heap space
ERROR 20:08:29 JVM state determined to be unstable. Exiting forcefully due to:
java.lang.OutOfMemoryError: Java heap space
ERROR 20:08:29 JVM state determined to be unstable. Exiting forcefully due to:
java.lang.OutOfMemoryError: Java heap space
ERROR 20:08:29 JVM state determined to be unstable. Exiting forcefully due to:
java.lang.OutOfMemoryError: Java heap space
ERROR 20:08:29 Unknown exception caught while attempting to update MaterializedView! findkita.kitas
java.lang.OutOfMemoryError: Java heap space
ERROR 20:08:29 Unknown exception caught while attempting to update MaterializedView! findkita.kitas
java.lang.OutOfMemoryError: Java heap space
ERROR 20:08:29 Unknown exception caught while attempting to update MaterializedView! findkita.kitas
java.lang.OutOfMemoryError: Java heap space
ERROR 20:08:29 Unknown exception caught while attempting to update MaterializedView! findkita.kitas
java.lang.OutOfMemoryError: Java heap space
WARN 20:08:29 ConcurrentMarkSweep GC in 5961ms. CMS Old Gen: 885922704 -> 895736336; Par Survivor Space: 18224024 -> 0
ERROR 20:08:26 Unknown exception caught while attempting to update MaterializedView! findkita.kitas
Those are some of the errors I could pick out, but I do not know how to solve them. Because of the errors the container does not start, so I cannot access Cassandra directly.
One error is repeated every time:
ERROR 20:08:26 Unknown exception caught while attempting to update MaterializedView! findkita.kitas
We are trying to add a new Solr node to our cluster:
DC Cassandra
Cassandra node 1
DC Solr
Solr node 1 <-- new node (actually, a replacement for an old node)
Solr node 2
Solr node 3
Solr node 4
Solr node 5
During the bootstrap process:
The stream from node 3 to node 1 failed with an exception:
ERROR [STREAM-OUT-/IP_OF_NODE1] 2014-04-01 01:14:40,887 CassandraDaemon.java (line 196) Exception in thread Thread[STREAM-OUT-/IP_OF_NODE1,5,main]
java.lang.NullPointerException
at org.apache.cassandra.streaming.ConnectionHandler$MessageHandler.signalCloseDone(ConnectionHandler.java:249)
at org.apache.cassandra.streaming.ConnectionHandler$OutgoingMessageHandler.run(ConnectionHandler.java:375)
at java.lang.Thread.run(Thread.java:744)
The stream from node 4 to node 1 never started. The last relevant line in node 4's system.log is:
Received streaming plan for Bootstrap.
It should have been followed by:
Prepare completed. Receiving 0 files(0 bytes), sending x files(y bytes)
It seems that the bootstrap process is now stalled because the data file sizes are not changing anymore. How can I force those streams to be retried?
EDIT:
I restarted all nodes today in an attempt to force new node to retry the bootstrap process. Unfortunately, it encountered some stream failures again. This time, the exception in node 1 is as follows:
WARN [STREAM-IN-/IP_OF_NODE3] 2014-04-06 20:48:17,963 StreamSession.java (line 532) [Stream #c84effb0-bda9-11e3-a07d-89325af2f6bf] Retrying for following error
java.lang.RuntimeException: java.io.FileNotFoundException: /home/cassandra/data/my_keyspace/my_table/my_keyspace-my_table-tmp-jb-1209-Data.db (Too many open files)
at org.apache.cassandra.io.util.SequentialWriter.<init>(SequentialWriter.java:75)
at org.apache.cassandra.io.compress.CompressedSequentialWriter.<init>(CompressedSequentialWriter.java:71)
at org.apache.cassandra.io.compress.CompressedSequentialWriter.open(CompressedSequentialWriter.java:42)
at org.apache.cassandra.io.sstable.SSTableWriter.<init>(SSTableWriter.java:107)
at org.apache.cassandra.io.sstable.SSTableWriter.<init>(SSTableWriter.java:60)
at org.apache.cassandra.streaming.StreamReader.createWriter(StreamReader.java:111)
at org.apache.cassandra.streaming.compress.CompressedStreamReader.read(CompressedStreamReader.java:65)
at org.apache.cassandra.streaming.messages.IncomingFileMessage$1.deserialize(IncomingFileMessage.java:47)
at org.apache.cassandra.streaming.messages.IncomingFileMessage$1.deserialize(IncomingFileMessage.java:37)
at org.apache.cassandra.streaming.messages.StreamMessage.deserialize(StreamMessage.java:55)
at org.apache.cassandra.streaming.ConnectionHandler$IncomingMessageHandler.run(ConnectionHandler.java:283)
at java.lang.Thread.run(Thread.java:724)
Caused by: java.io.FileNotFoundException: /home/cassandra/data/my_keyspace/my_table/my_keyspace-my_table-tmp-jb-1209-Data.db (Too many open files)
at java.io.RandomAccessFile.open(Native Method)
at java.io.RandomAccessFile.<init>(RandomAccessFile.java:233)
at org.apache.cassandra.io.util.SequentialWriter.<init>(SequentialWriter.java:71)
ERROR [STREAM-IN-/78.46.63.218] 2014-04-06 20:48:17,964 StreamSession.java (line 418) [Stream #c84effb0-bda9-11e3-a07d-89325af2f6bf] Streaming error occurred
java.lang.IllegalArgumentException: Unknown type 0
at org.apache.cassandra.streaming.messages.StreamMessage$Type.get(StreamMessage.java:89)
at org.apache.cassandra.streaming.messages.StreamMessage.deserialize(StreamMessage.java:54)
at org.apache.cassandra.streaming.ConnectionHandler$IncomingMessageHandler.run(ConnectionHandler.java:283)
at java.lang.Thread.run(Thread.java:724)
There are tons of similar errors in the log, e.g.:
ERROR [CompactionExecutor:129] 2014-04-06 20:50:06,401 CassandraDaemon.java (line 196) Exception in thread Thread[CompactionExecutor:129,1,main]
java.lang.RuntimeException: java.lang.RuntimeException: java.io.FileNotFoundException: /home/cassandra/data/my_keyspace/my_table/my_keyspace-my_table-jb-51-Data.db (Too many open files)
at org.apache.cassandra.service.pager.QueryPagers$1.next(QueryPagers.java:154)
at org.apache.cassandra.service.pager.QueryPagers$1.next(QueryPagers.java:137)
at org.apache.cassandra.db.Keyspace.indexRow(Keyspace.java:400)
at org.apache.cassandra.db.index.SecondaryIndexBuilder.build(SecondaryIndexBuilder.java:62)
at org.apache.cassandra.db.compaction.CompactionManager$9.run(CompactionManager.java:833)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334)
at java.util.concurrent.FutureTask.run(FutureTask.java:166)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:724)
Caused by: java.lang.RuntimeException: java.io.FileNotFoundException: /home/cassandra/data/my_keyspace/my_table/my_keyspace-my_table-jb-51-Data.db (Too many open files)
at org.apache.cassandra.io.compress.CompressedRandomAccessReader.open(CompressedRandomAccessReader.java:47)
at org.apache.cassandra.io.util.CompressedPoolingSegmentedFile.createReader(CompressedPoolingSegmentedFile.java:48)
at org.apache.cassandra.io.util.PoolingSegmentedFile.getSegment(PoolingSegmentedFile.java:39)
at org.apache.cassandra.io.sstable.SSTableReader.getFileDataInput(SSTableReader.java:1195)
at org.apache.cassandra.db.columniterator.SimpleSliceReader.<init>(SimpleSliceReader.java:57)
at org.apache.cassandra.db.columniterator.SSTableSliceIterator.createReader(SSTableSliceIterator.java:65)
at org.apache.cassandra.db.columniterator.SSTableSliceIterator.<init>(SSTableSliceIterator.java:42)
at org.apache.cassandra.db.filter.SliceQueryFilter.getSSTableColumnIterator(SliceQueryFilter.java:167)
at org.apache.cassandra.db.filter.QueryFilter.getSSTableColumnIterator(QueryFilter.java:62)
at org.apache.cassandra.db.CollationController.collectAllData(CollationController.java:250)
at org.apache.cassandra.db.CollationController.getTopLevelColumns(CollationController.java:53)
at org.apache.cassandra.db.ColumnFamilyStore.getTopLevelColumns(ColumnFamilyStore.java:1550)
at org.apache.cassandra.db.ColumnFamilyStore.getColumnFamily(ColumnFamilyStore.java:1379)
at org.apache.cassandra.db.Keyspace.getRow(Keyspace.java:327)
at org.apache.cassandra.db.SliceFromReadCommand.getRow(SliceFromReadCommand.java:65)
at org.apache.cassandra.service.pager.SliceQueryPager.queryNextPage(SliceQueryPager.java:77)
at org.apache.cassandra.service.pager.AbstractQueryPager.fetchPage(AbstractQueryPager.java:84)
at org.apache.cassandra.service.pager.SliceQueryPager.fetchPage(SliceQueryPager.java:33)
at org.apache.cassandra.service.pager.QueryPagers$1.next(QueryPagers.java:148)
... 10 more
Caused by: java.io.FileNotFoundException: /home/cassandra/data/my_keyspace/my_table/my_keyspace-my_table-jb-51-Data.db (Too many open files)
at java.io.RandomAccessFile.open(Native Method)
at java.io.RandomAccessFile.<init>(RandomAccessFile.java:233)
at org.apache.cassandra.io.util.RandomAccessReader.<init>(RandomAccessReader.java:58)
at org.apache.cassandra.io.compress.CompressedRandomAccessReader.<init>(CompressedRandomAccessReader.java:76)
at org.apache.cassandra.io.compress.CompressedRandomAccessReader.open(CompressedRandomAccessReader.java:43)
... 28 more
This appears to be very similar to a Cassandra bug/issue:
https://issues.apache.org/jira/browse/CASSANDRA-6965
I'll follow up on that.
Meanwhile, you could run rebuild/repair on that new node.
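A sketch of those commands, run with nodetool on the new node; the source datacenter name "Cassandra" is assumed from the DC listing in the question:
nodetool rebuild -- Cassandra   # re-stream data from the Cassandra DC
nodetool repair                 # or repair this node's ranges instead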
EDIT: Another Cassandra issue that appears to be related:
CASSANDRA-6984 - "NullPointerException in Streaming During Repair"
https://issues.apache.org/jira/browse/CASSANDRA-6984
That issue is labeled as a Blocker, so it should get some prompt attention. I've inquired as to whether there is a workaround.
Stay tuned.
(Too many open files)
Looks like you need to increase your ulimit.
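A minimal sketch of checking and raising it on Linux; the 100000 value is a commonly recommended figure for Cassandra, not something taken from your logs:
ulimit -n    # current open-files limit for this shell/user
# to persist a higher limit for the user running Cassandra, add to /etc/security/limits.conf:
cassandra - nofile 100000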
I installed the neo4j 2.0.0 M06 version on my Ubuntu PC. Its service worked fine, and I could use the new web browser perfectly.
Then, I used the sample Java project (https://github.com/neo4j/neo4j/blob/2.0.0-M06/community/embedded-examples/src/main/java/org/neo4j/examples/EmbeddedNeo4jWithIndexing.java) to connect to the DB in embedded mode and add some nodes. (By the way, I'm sure I stopped the neo4j service before launching the Java application.)
I changed the number of nodes added by the program to 100,000, and the application crashed on exceeding the heap size (GC overhead limit).
Now, when trying to launch the neo4j I get a startup error :
2013-11-01 09:53:13.806+0000 DEBUG [API] Failed to start Neo Server on port [7474]
2013-11-01 10:00:52.865+0000 INFO [API] Setting startup timeout to: 120000ms based on -1
2013-11-01 10:00:52.998+0000 DEBUG [API]
org.neo4j.server.ServerStartupException: Starting Neo4j Server failed: org/neo4j/helpers/Settings
at org.neo4j.server.AbstractNeoServer.start(AbstractNeoServer.java:193) ~[neo4j-server-2.0.0-M06.jar:2.0.0-M06]
at org.neo4j.server.Bootstrapper.start(Bootstrapper.java:87) [neo4j-server-2.0.0-M06.jar:2.0.0-M06]
at org.neo4j.server.Bootstrapper.main(Bootstrapper.java:50) [neo4j-server-2.0.0-M06.jar:2.0.0-M06]
Caused by: java.lang.NoClassDefFoundError: org/neo4j/helpers/Settings
at org.neo4j.shell.ShellSettings.<clinit>(ShellSettings.java:42) ~[neo4j-shell-2.0.0-M06.jar:2.0.0-M06]
at org.neo4j.server.database.CommunityDatabase.getDbTuningPropertiesWithServerDefaults(CommunityDatabase.java:106) ~[neo4j-server-2.0.0-M06.jar:2.0.0-M06]
at org.neo4j.server.enterprise.EnterpriseDatabase.start(EnterpriseDatabase.java:89) ~[neo4j-server-enterprise-2.0.0-M06.jar:2.0.0-M06]
at org.neo4j.server.AbstractNeoServer.start(AbstractNeoServer.java:141) ~[neo4j-server-2.0.0-M06.jar:2.0.0-M06]
... 2 common frames omitted
Caused by: java.lang.ClassNotFoundException: org.neo4j.helpers.Settings
at java.net.URLClassLoader$1.run(URLClassLoader.java:366) ~[na:1.7.0_45]
at java.net.URLClassLoader$1.run(URLClassLoader.java:355) ~[na:1.7.0_45]
at java.security.AccessController.doPrivileged(Native Method) ~[na:1.7.0_45]
at java.net.URLClassLoader.findClass(URLClassLoader.java:354) ~[na:1.7.0_45]
at java.lang.ClassLoader.loadClass(ClassLoader.java:425) ~[na:1.7.0_45]
at java.lang.ClassLoader.loadClass(ClassLoader.java:358) ~[na:1.7.0_45]
... 6 common frames omitted
2013-11-01 10:00:53.000+0000 DEBUG [API] Failed to start Neo Server on port [7474]
I found that the problem was with the jar files. Unfortunately, after solving the jar-file problem, I had to reinstall neo4j for the service to work again.