I have been trying to use Neo4j Community in a container and am getting errors. I think this might be more a Docker usage issue than a Neo4j usage issue.
I have built a container image from https://github.com/neo4j/docker-neo4j-publish for 2.3.9, 3.3.3, 3.3.4 and 3.3.5 (the only differences being some new ports in later versions). I have even pulled a native 3.3.3 image from Docker Hub.
mkdir /tmp/data
chmod 777 /tmp/data
docker run --detach=true --name=neo4j --publish=7474:7474 --publish=7687:7687 --publish=7473:7473 --volume=/tmp/data:/data neo4j:3.3.3
docker exec -it neo4j find / -name '*.log'
and although it seems to be working with
neo4j> CREATE (n);
0 rows available after 50 ms, consumed after another 0 ms
Added 1 nodes
neo4j> CREATE (m),(o);
0 rows available after 15 ms, consumed after another 0 ms
Added 2 nodes
neo4j> MATCH (n) RETURN n;
+----+
| n |
+----+
| () |
| () |
| () |
+----+
3 rows available after 21 ms, consumed after another 8 ms
I actually get errors like this:
docker exec -it neo4j neo4j status
Neo4j is not running
Now this one looks like I am mistakenly trying to start another instance of Neo4j over a running instance:
docker exec -it neo4j neo4j console
Active database: graph.db
Directories in use:
home: /var/lib/neo4j
config: /var/lib/neo4j/conf
logs: /var/lib/neo4j/logs
plugins: /var/lib/neo4j/plugins
import: /var/lib/neo4j/import
data: /var/lib/neo4j/data
certificates: /var/lib/neo4j/certificates
run: /var/lib/neo4j/run
Starting Neo4j.
2018-04-15 06:30:13.119+0000 WARN Unknown config option: causal_clustering.discovery_listen_address
2018-04-15 06:30:13.123+0000 WARN Unknown config option: causal_clustering.raft_advertised_address
2018-04-15 06:30:13.123+0000 WARN Unknown config option: causal_clustering.raft_listen_address
2018-04-15 06:30:13.123+0000 WARN Unknown config option: ha.host.coordination
2018-04-15 06:30:13.124+0000 WARN Unknown config option: causal_clustering.transaction_advertised_address
2018-04-15 06:30:13.124+0000 WARN Unknown config option: causal_clustering.discovery_advertised_address
2018-04-15 06:30:13.124+0000 WARN Unknown config option: ha.host.data
2018-04-15 06:30:13.124+0000 WARN Unknown config option: causal_clustering.transaction_listen_address
2018-04-15 06:30:13.146+0000 INFO ======== Neo4j 3.3.3 ========
2018-04-15 06:30:13.186+0000 INFO Starting...
2018-04-15 06:30:13.997+0000 INFO Bolt enabled on 0.0.0.0:7687.
2018-04-15 06:30:14.094+0000 ERROR Failed to start Neo4j: Starting Neo4j failed: Component 'org.neo4j.server.database.LifecycleManagingDatabase#44a59da3' was successfully initialized, but failed to start. Please see the attached cause exception "Store and its lock file has been locked by another process: /var/lib/neo4j/data/databases/graph.db/store_lock. Please ensure no other process is using this database, and that the directory is writable (required even for read-only access)". Starting Neo4j failed: Component 'org.neo4j.server.database.LifecycleManagingDatabase#44a59da3' was successfully initialized, but failed to start. Please see the attached cause exception "Store and its lock file has been locked by another process: /var/lib/neo4j/data/databases/graph.db/store_lock. Please ensure no other process is using this database, and that the directory is writable (required even for read-only access)".
org.neo4j.server.ServerStartupException: Starting Neo4j failed: Component 'org.neo4j.server.database.LifecycleManagingDatabase#44a59da3' was successfully initialized, but failed to start. Please see the attached cause exception "Store and its lock file has been locked by another process: /var/lib/neo4j/data/databases/graph.db/store_lock. Please ensure no other process is using this database, and that the directory is writable (required even for read-only access)".
Does anybody have experience with Neo4j's Docker implementation? Is it a single-threaded issue, meaning I need to call the CLI tools differently from the container?
The neo4j status command only works if you've started Neo4j with neo4j start. start creates a neo4j.pid file that status uses to see if Neo4j is running. Starting under Docker uses the console option instead of the start option; console does not create the PID file, so status doesn't work. That also explains your store_lock error: running neo4j console via docker exec tries to start a second Neo4j against a database the first one already holds. But the missing PID file hardly matters, because neo4j is just about the only process running in the container: if neo4j dies, the container exits. If docker ps -a says that the container is up, then neo4j is up.
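If you want a liveness check that works despite the missing PID file, query the running instance from outside instead of starting a second one. A minimal sketch (the password is a placeholder; the official image also honors NEO4J_AUTH=none to disable auth):
# Is the container (and therefore neo4j) still up?
docker ps --filter name=neo4j --format '{{.Names}}: {{.Status}}'
# Run a query against the already-running instance instead of a second neo4j process
echo 'MATCH (n) RETURN count(n);' | docker exec -i neo4j cypher-shell -u neo4j -p secret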
Related
I'm running apache-jena-fuseki-3.13.1 and just found tdb2.tdbcompact in its bin directory. I should run tdb2.tdbcompact nightly to prevent my jena-fuseki from running out of disk space, but now I get an error message (Failed to get a lock: file) when running it:
miettinj@ramen:~/jena> ./apache-jena-3.13.1/bin/tdb2.tdbcompact --loc=./apache-jena-fuseki-3.13.1/run/databases/test_TDB2
org.apache.jena.dboe.DBOpEnvException: Failed to get a lock: file='/srv/work/miettinj/jena/apache-jena-fuseki-3.13.1/run/databases/test_TDB2/tdb.lock': held by process 6136
ps -x|grep 6136
6136 ? Sl 30:48 /usr/lib64/jvm/java/bin/java -Xmx1200M -cp /srv/work/miettinj/jena/apache-jena-fuseki-3.13.1/fuseki-server.jar
"held by process 6136"
Another process is using the database. Compaction has to happen from the process using the database.
Apache Jena Fuseki 3.17.0 added an administration endpoint so that the administrator can ask for compaction on a running Fuseki server.
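For example (a sketch assuming a Fuseki 3.17.0+ server on the default port 3030 and the dataset name from the question; on 3.13.x you would instead stop Fuseki so it releases tdb.lock, then run tdb2.tdbcompact offline):
curl -X POST 'http://localhost:3030/$/compact/test_TDB2'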
Having a lot of trouble installing MySQL 5.7 on macOS Mojave (ran brew install mysql@5.7).
On the initial install, I got a message saying postinstall was not completed successfully (please see the message below).
So even after I delete everything in the directory /usr/local/var/mysql (which MySQL says is not empty), I STILL get the same message when re-running the postinstall command, which is quite annoying: MySQL seems to be populating the data dir and then complaining that it is not empty?!
[08:02:48][~/tmp]#brew postinstall mysql@5.7
==> Postinstalling mysql@5.7
==> /usr/local/Cellar/mysql@5.7/5.7.28/bin/mysqld --initialize-insecure --user=gert --basedir=/usr/local/Cellar/mysql@5.7/5.7.28 --datadir=/usr/local/var/my
Last 15 lines from /Users/gert/Library/Logs/Homebrew/mysql@5.7/post_install.01.mysqld:
2019-12-09 08:03:39 +0200
/usr/local/Cellar/mysql@5.7/5.7.28/bin/mysqld
--initialize-insecure
--user=gert
--basedir=/usr/local/Cellar/mysql@5.7/5.7.28
--datadir=/usr/local/var/mysql
--tmpdir=/tmp
2019-12-09T06:03:39.151987Z 0 [Warning] TIMESTAMP with implicit DEFAULT value is deprecated. Please use --explicit_defaults_for_timestamp server option (see documentation for more details).
2019-12-09T06:03:39.154025Z 0 [ERROR] --initialize specified but the data directory has files in it. Aborting.
2019-12-09T06:03:39.154074Z 0 [ERROR] Aborting
Trying to start MySQL as root gives an error:
[08:04:41][~/tmp]#sudo /usr/local/opt/mysql@5.7/bin/mysql.server start
Password:
Starting MySQL
..... ERROR! The server quit without updating PID file (/var/run/mysqld/mysqld.pid).
I've been banging my head against the wall for days now trying to follow Stack Overflow posts like MySql server startup error 'The server quit without updating PID file', none of which is working ...
My my.cnf:
[mysqld]
# Only allow connections from localhost
#bind-address = 127.0.0.1
#SO posts said to comment out the above ...
pid-file = /var/run/mysqld/mysqld.pid #Checked, this folder + file exists, with write permissions
Try using a data directory away from the MySQL directory, i.e. if MySQL is in /usr/local/mysql, use /var/data as the data directory:
root@photon [ /var ]# /usr/local/mysql/bin/mysqld --initialize-insecure --user=mysql --datadir=/var/data
2020-02-22T21:42:27.121230Z 0 [System] [MY-013169] [Server] /usr/local/mysql/bin/mysqld (mysqld 8.0.19) initializing of server in progress as process 820
2020-02-22T21:42:35.018238Z 5 [Warning] [MY-010453] [Server] root#localhost is created with an empty password ! Please consider switching off the --initialize-insecure option.
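With the Homebrew setup from the question, the equivalent reset is to make sure nothing is holding the directory, wipe the half-initialized datadir, and re-run the failed step (a sketch, assuming Homebrew's default datadir /usr/local/var/mysql and that it contains nothing you want to keep):
# stop any mysqld that may be holding the data directory
brew services stop mysql@5.7
# remove the partially initialized data directory, then re-run the failed step
rm -rf /usr/local/var/mysql
brew postinstall mysql@5.7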
Version : spring-cloud-dataflow-server-yarn-1.2.2.RELEASE
Issue: All OOTB / custom task apps seem to be NOT working with the YARN deployer (I specifically tested with timestamp-task-1.3.0.RELEASE and a hello-world custom task built per the reference doc).
We have a YARN cluster where all the streams we have deployed are running fine, which rules out any issue with the Hadoop/YARN cluster. The moment we try to deploy a task, the task exits with code 0, with the below message logged in the YARN Container/AppMaster stdout:
2018-09-19 18:04:20.782 DEBUG 22625 --- [ask-scheduler-2] o.s.yarn.am.allocate.AbstractAllocator : completed container: container_1536919363436_0805_01_000002 with status=ContainerStatus: [ContainerId: container_1536919363436_0805_01_000002, State: COMPLETE, Diagnostics: Exception from container-launch.
Container id: container_1536919363436_0805_01_000002
Exit code: 1
Stack trace: ExitCodeException exitCode=1:
at org.apache.hadoop.util.Shell.runCommand(Shell.java:545)
at org.apache.hadoop.util.Shell.run(Shell.java:456)
at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:722)
at org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:211)
at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:302)
at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:82)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Container exited with a non-zero exit code 1
, ExitStatus: 1, ]
The full AppMaster log can be found here, and the corresponding servers.yml can be found here.
Any help is appreciated.
I am answering my own question: our YARN server had log aggregation enabled, so container logs weren't displayed immediately and I had to grep through the aggregated logs to find out why custom tasks weren't launching. Once we (temporarily) disabled log aggregation in YARN, the custom task's Container.stdout and Container.stderr were visible under the log directory configured in yarn-site.xml.
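With aggregation left enabled, the same logs can also be pulled through the YARN CLI rather than grepping the aggregated files by hand; the application ID here is derived from the container ID in the question:
yarn logs -applicationId application_1536919363436_0805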
I'm trying to use graylog2 to collect logs from Docker containers. The docs say that only the UDP GELF input is supported for this purpose.
I'm using docker-compose to run the graylog server. See gist for all files used: https://gist.github.com/olegabr/7f5190c453bb63c71dabf151d2373c2f.
And I'm using this command to test it:
sendip -p ipv4 -is 127.0.0.1 -p udp -us 5070 -ud 12201 -d '{"version": "1.1","host":"example.org","short_message":"Short message","full_message":"Backtrace here\n\nmore stuff","level":1,"_user_id":9001,"_some_info":"foo","_some_env_var":"bar"}' -v 127.0.0.1
The server receives this message, but it cannot process it. I see the following in the graylog2 logs:
2016-12-09 11:53:20,125 WARN : org.graylog2.bindings.providers.DefaultStreamProvider - Unable to load default stream, tried 1 times, retrying every 500ms. Processing is blocked until this succeeds.
2016-12-09 11:53:25,129 WARN : org.graylog2.bindings.providers.DefaultStreamProvider - Unable to load default stream, tried 11 times, retrying every 500ms. Processing is blocked until this succeeds.
etc., with many, many similar lines.
The API call curl http://admin:123456@127.0.0.1:9000/api/count/total returns
{"events":0}
In the server logs I see that the default stream was initialized:
mongo_1 | 2016-12-09T11:51:12.522+0000 I INDEX [conn3] build index on: graylog.pipeline_processor_pipelines_streams properties: { v: 2, unique: true, key: { stream_id: 1 }, name: "stream_id_1", ns: "graylog.pipeline_processor_pipelines_streams" }
graylog_1 | 2016-12-09 11:51:13,408 INFO : org.graylog2.periodical.Periodicals - Starting [org.graylog.plugins.pipelineprocessor.periodical.LegacyDefaultStreamMigration] periodical, running forever.
graylog_1 | 2016-12-09 11:51:13,424 INFO : org.graylog.plugins.pipelineprocessor.periodical.LegacyDefaultStreamMigration - Legacy default stream has no connections, no migration needed.
graylog_1 | 2016-12-09 11:51:13,487 INFO : org.graylog2.migrations.V20160929120500_CreateDefaultStreamMigration - Successfully created default stream: All messages
graylog_1 | 2016-12-09 11:51:13,653 INFO : org.graylog2.migrations.V20161125142400_EmailAlarmCallbackMigration - No streams needed to be migrated.
graylog_1 | 2016-12-09 11:51:13,662 INFO : org.graylog2.migrations.V20161125161400_AlertReceiversMigration - No streams needed to be migrated.
graylog_1 | 2016-12-09 11:51:13,672 INFO : org.graylog2.migrations.V20161130141500_DefaultStreamRecalcIndexRanges - Cluster not connected yet, delaying migration until it is reachable.
So why can it not be loaded when the message arrives? And why is it needed in the first place?
I've tried to find similar reports on the web, but with no success.
This has nothing to do with the UDP input per se.
Graylog 2.2.0-beta.1 is broken and shouldn't be used. Please downgrade to Graylog 2.1.2 (the latest stable version) or wait for Graylog 2.2.0-beta.2.
See https://groups.google.com/forum/#!searchin/graylog2/docker|sort:date/graylog2/gCycC3_K3vU/EL-Lz_uNDQAJ for a related post on the Graylog mailing list.
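If you are using the docker-compose setup from the gist, downgrading means pinning the image to a 2.1.x tag instead of latest, along the lines of the snippet below (the exact tag name is an assumption; check Docker Hub for the current 2.1 tag):
graylog:
  image: graylog2/server:2.1.2-1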
Same trouble here. I just set up Graylog and configured a GELF UDP input on port 12209, then tested it twice with:
docker run --log-driver=gelf --log-opt gelf-address=udp://127.0.0.1:12209 busybox echo Hello Graylog
In the UI I saw:
2 messages in process buffer
2 unprocessed messages are currently in the journal, in 1 segments.
0 messages have been appended in the last second, 0 messages have been read in the last second.
and still getting:
2016-12-09 12:41:23,715 INFO : org.graylog2.inputs.InputStateListener - Input [GELF UDP/584aa67308813b00010d009e] is now RUNNING
2016-12-09 12:41:43,666 WARN : org.graylog2.bindings.providers.DefaultStreamProvider - Unable to load default stream, tried 1 times, retrying every 500ms. Processing is blocked until this succeeds.
Has anyone found a solution?
We are using DataStax Enterprise version 5.0.1 and are facing an issue while creating a graph from the Gremlin Console.
Here are the details of the error that I am getting:
adminuser@dc0vm1:~$ dse gremlin-console
\,,,/
(o o)
-----oOOo-(3)-oOOo-----
plugin activated: tinkerpop.tinkergraph
plugin activated: tinkerpop.server
plugin activated: tinkerpop.utilities
gremlin> :remote connect tinkerpop.server conf/remote.yaml
==>Configured 13.82.30.252/13.82.30.252:8182
gremlin> :> 1+1
Host did not respond in a timely fashion - check the server status and submit again.
gremlin> :> system.graph('food').create()
Host did not respond in a timely fashion - check the server status and submit again.
I changed the remote.yaml file settings from hosts: [localhost] to
hosts: [13.82.30.252].
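Since the console configures the remote but then times out on :> queries, a quick check that something is actually listening on the Gremlin Server port can save time (a sketch using netcat; 8182 comes from the connect output above):
nc -zv 13.82.30.252 8182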
I ran the nodetool command to check if the server is running properly:
adminuser@dc0vm1:~$ nodetool status
Datacenter: dc0
Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
-- Address Load Tokens Owns Host ID Rack
UN 13.82.25.134 168.92 KB 64 ? d7a98eed-9b15-42ee-bc5c-f406e98fd6fc FD2
UN 13.82.25.152 189.17 KB 64 ? 7ffa11ea-8607-4bdb-903b-2ee3baeacae8 FD0
UN 13.82.30.252 150.6 KB 64 ? a57f6cd8-5466-480e-b919-329c36fbfd28 FD1
The cassandra.yaml has the following entries related to the host:
broadcast_rpc_address: 13.82.30.252
rpc_address: 0.0.0.0
Could you please let me know what configuration I am missing here?
I figured out that by default the DSE Graph service is not enabled, so you need to edit the file "dse" to enable it -
sudo vim /etc/default/dse
Make sure that the following parameter is set to 1 –
# Enable the DSE Graph service on this node
GRAPH_ENABLED=1
Restart the DSE service -
sudo service dse stop
sudo service dse start
Now the Gremlin Console is able to connect and create the graph.
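For example, the same session from the question should now succeed instead of timing out (system.graphs(), to list existing graphs, is an assumption worth checking against the DSE 5.0 docs):
gremlin> :remote connect tinkerpop.server conf/remote.yaml
gremlin> :> system.graph('food').create()
gremlin> :> system.graphs()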