Separate transaction logs in Neo4j Enterprise

I'm trying to move the logical logs out of the *.db folder and onto another disk volume. However, I don't see any option in the Neo4j config files that allows this. Is this configuration possible?
My Neo4j version is 3.2.1.
Thanks

No, it is not possible at this time to move the transaction logs somewhere else. Note that while the term "logs" is technically correct, these files are essential to the integrity of the database (unlike a regular log, it would be very unwise to delete them), so it is only logical that they live together with the actual data files.
Hope this helps,
Tom

See the file conf/neo4j.conf and the line:
#dbms.directories.logs=logs
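Note that this setting relocates Neo4j's general log directory (debug.log, query.log and the like); as the answer above explains, the transaction logs themselves stay next to the store files. Uncommented, it would look something like this (the target path is just an example):
# conf/neo4j.conf -- the target path here is only an example
# moves debug.log, query.log, etc. to another volume; transaction logs stay with the store
dbms.directories.logs=/mnt/other-volume/neo4j/logs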

Related

How to display log for Rails application?

I have a Rails app and I would like to display the log in the app itself. This would allow administrators to see what changes were made recently without opening a console and digging through the log file. All logs would be shown in the application's admin area. How can this be implemented, and what kind of gems do I need to use?
You don't need a Gem.
Add a controller, read the logfiles and render the output in HTML.
You will probably need to limit the number of lines you read. Also, there might be different log files to choose from.
I don't think this is a good idea, though. Log files are for finding errors, and you should not need them in your day-to-day work unless you manage the server.
They might also contain sensitive data (credit card numbers, passwords, ...), and it can get complicated when you use multiple servers with local disks.
Probably better to look at dedicated tools for this and handle logs outside of your application.
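That said, if you do want the quick in-app version, a minimal sketch could look like this (controller, route and view names are made up for illustration):
# app/controllers/logs_controller.rb  (names are hypothetical)
class LogsController < ApplicationController
  MAX_LINES = 200 # limit what gets rendered; log files can be huge

  def show
    path = Rails.root.join("log", "#{Rails.env}.log")
    # reads the whole file and keeps only the tail; fine for small logs,
    # use a proper tail implementation for very large files
    @lines = File.exist?(path) ? File.readlines(path).last(MAX_LINES) : []
  end
end

# config/routes.rb:          get "admin/log", to: "logs#show"
# app/views/logs/show.html.erb:  <pre><%= @lines.join %></pre>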
Assuming you have Git Bash (or any shell that provides tail) installed on your system: to display the log for development mode, change to your application folder in a console/terminal and run
tail -f log/development.log

Neo4j import tool (Bulk) - Why is the restart required?

This question has come up on our team a few times, and I don't know the real answer.
Can someone point me in the right direction to better understand why a restart is needed after bulk importing CSV files?
Or is there an option to tell Neo4j to silently point to the new DB folder without a restart?
Cheers!
The import utility creates a previously non-existent database. It works offline (that's the whole point), so Neo4j cannot be running on that database and the resulting folder cannot exist beforehand. So even if you use the default folder name (graph.db), you have to start Neo4j after the import, or change neo4j.conf to point to the newly created folder and then start Neo4j. Either way you have to start/restart Neo4j ... changes to the configuration are not picked up by a running instance.
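Roughly, the flow looks like this (file names and paths are examples, and the exact import command and flags depend on your Neo4j version):
# offline bulk import into a brand-new store directory
bin/neo4j-import --into data/databases/graph.db --nodes nodes.csv --relationships rels.csv
# conf/neo4j.conf -- only needed if you did not import into the default store name
#dbms.active_database=graph.db
# configuration is only read at startup, so start (or restart) Neo4j afterwards
bin/neo4j start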
Hope this helps,
Regards,
Tom

neo4j cluster: No such log version

I am trying to use a Neo4j HA cluster (Neo4j 2.0.1), but I get the error "No such log version: ..." after the database is copied from the master.
I deleted all *.log files, but that did not help.
Can you help me with this problem?
TIA.
"Log" in Neo4j refers to the write-ahead-log that the database uses to ensure durability and consistency. It is stored in files named something like nioneo_logical_log.vX, where X is a number. You should never delete these files manually, even if the database is turned off, this may lead to data loss. If you wish to limit the amount of disk space used by neo for logs, use this configuration:
keep_logical_logs=128M
The error you are seeing means that the copied database cannot perform an incremental update to catch up with the master, because the log files have been deleted. This may happen normally, if the copy is very old, or if you have keep_logical_logs set to false.
What you need to do to resolve this is to set keep_logical_logs to some value that makes sense to you, perhaps "keep_logical_logs=2 days" to keep logs up to two days old around. Once you've done that, you need to do a new full store copy on your slave, which can be triggered by shutting the slave off, deleting the database folder (data/graph.db), and starting the slave back up again.
Newer versions of Neo will do this automatically.
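On the slave, the steps above look roughly like this (paths and commands assume a default Neo4j 2.x layout; adjust to your installation):
# conf/neo4j.properties -- keep a window of logical logs around
keep_logical_logs=2 days
# force a fresh full store copy from the master
bin/neo4j stop
rm -rf data/graph.db      # the slave re-copies the store from the master on startup
bin/neo4j start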

Rabbitmq erlang client build failed due to file paths problems?

I have been able to build the RabbitMQ server on Ubuntu Linux. It came prepackaged, and after running make it is able to start as a service. When I got the client source, the build failed because it appeared to need a folder called ./deps/rabbitmq-server. Analysing the code, I found that the author of the client accesses the same header files as the server, using include_lib("path to rabbit.hrl etc.") in a header file called "amqp_client.hrl". I then decided to add rabbitmq_server to Erlang's lib dir so that its paths are added automatically on startup of the VM, but this did not help either. The client also references another folder called "rabbit_common", whose include folder it assumes contains all the .hrl files. Please assist me in building both the client and the server on my Ubuntu server for testing.
Also, if anyone has used the RabbitMQ server for IM, please share some benchmarks and/or your findings on its throughput, speed and number of users. How does it compare to ejabberd? And how can one create AJAX/jQuery/JavaScript clients for web functionality?
thanks
I hope you have made some progress as far as RabbitMQ and ejabberd are concerned.
Below is a link to an interesting discussion that might be of help.
http://old.nabble.com/AMPQ-vs-XMPP-and-RabbitMQ-vs-ejabberd-td17587109.html

Please explain the Ab Initio recovery file (.rec). When should we roll back the file?

Please explain the concept of the Ab Initio recovery file.
When an Ab Initio graph fails during execution, in which cases should we roll back the recovery file, and in which cases should we not?
Please provide links to any Ab Initio materials.
Thanks.
The only time you would want to use the recovery file (.rec) is when you are executing a multi-phase graph and at least one phase has completed. You can then use the .rec file to restart the graph from the most recently completed phase.
However, you should only use the .rec file if something external to the graph caused the failure, for example the network going down, a shared disk becoming unavailable, or something similar. If a bug in your code caused the failure, then you'll want to use m_rollback to remove both the .rec file and any intermediate files Ab Initio created, and start over.
Ab Initio does not publish its manuals; you will have to contact Ab Initio directly for materials.
m_rollback with the -d option will delete the job, its temporary files and the recovery file after the rollback is successful.
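For example (the recovery file name here is only illustrative):
# roll back a failed run; -d also removes the job, its temporary files and the .rec file
m_rollback -d my_graph.rec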
