I have a Neo4j Enterprise 4.0 server running from the official Docker image "neo4j:4.0-enterprise". The server has been up and running for more than a month with 3 configured databases. The server is executed as:
docker run --rm \
-v /home/neo4j:/data -v /var/log/neo4j:/logs \
-e NEO4J_ACCEPT_LICENSE_AGREEMENT=yes \
-e NEO4J_dbms_mode=SINGLE \
-e NEO4J_AUTH=neo4j/secure-password \
-e NEO4J_dbms_memory_heap_initial__size=5100m \
-e NEO4J_dbms_memory_heap_max__size=5100m \
-e NEO4J_dbms_memory_pagecache_size=6900m \
-p 7474:7474 -p 7687:7687 \
--name neo4j \
neo4j:4.0-enterprise
Executing a very simple Cypher statement in the Neo4j browser console was failing in one of the databases (sandbox) with the following error message:
MATCH (n:IdentityProviderType) SET n.protocol = n.value
Newly created token should be unique.
The database contains only 2 nodes with the label IdentityProviderType.
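For reference, the property-key tokens the store knows about can be listed with the following diagnostic (a sketch; whether the duplicated protocol token actually shows up in this list, I cannot say):
// list every property-key token registered in the store
CALL db.propertyKeys()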
After a server restart (docker restart neo4j), the database failed to come online, showing the following message after executing :sysinfo:
An error occurred! Unable to start database with name 'sandbox'
Stopping the Neo4j server and executing:
docker run --rm -it \
-v /home/neo4j:/data \
-e NEO4J_ACCEPT_LICENSE_AGREEMENT=yes \
neo4j:4.0-enterprise \
neo4j-admin check-consistency --database=sandbox
2020-01-29 15:32:57.724+0000 INFO [o.n.k.i.s.f.RecordFormatSelector] Selected RecordFormat:StandardV4_0[SF4.0.0] record format from store /data/databases/sandbox
2020-01-29 15:32:57.726+0000 INFO [o.n.k.i.s.f.RecordFormatSelector] Format not configured for store /data/databases/sandbox. Selected format from the store files: RecordFormat:StandardV4_0[SF4.0.0]
org.neo4j.token.api.NonUniqueTokenException: The PropertyKey NamedToken[name:protocol, id:86, internal:false] is not unique, it existed as null.
at org.neo4j.token.TokenRegistry.checkNameUniqueness(TokenRegistry.java:199)
at org.neo4j.token.TokenRegistry.insertAllChecked(TokenRegistry.java:174)
at org.neo4j.token.TokenRegistry.setInitialTokens(TokenRegistry.java:64)
at org.neo4j.token.AbstractTokenHolderBase.setInitialTokens(AbstractTokenHolderBase.java:46)
at org.neo4j.token.TokenHolders.setInitialTokens(TokenHolders.java:59)
at org.neo4j.consistency.ConsistencyCheckService.runFullConsistencyCheck(ConsistencyCheckService.java:230)
at org.neo4j.consistency.ConsistencyCheckService.runFullConsistencyCheck(ConsistencyCheckService.java:158)
at org.neo4j.consistency.CheckConsistencyCommand.execute(CheckConsistencyCommand.java:137)
at org.neo4j.cli.AbstractCommand.call(AbstractCommand.java:60)
at org.neo4j.cli.AbstractCommand.call(AbstractCommand.java:30)
at picocli.CommandLine.executeUserObject(CommandLine.java:1743)
at picocli.CommandLine.access$900(CommandLine.java:145)
at picocli.CommandLine$RunLast.handle(CommandLine.java:2101)
at picocli.CommandLine$RunLast.handle(CommandLine.java:2068)
at picocli.CommandLine$AbstractParseResultHandler.execute(CommandLine.java:1935)
at picocli.CommandLine.execute(CommandLine.java:1864)
at org.neo4j.cli.AdminTool.execute(AdminTool.java:78)
at org.neo4j.cli.AdminTool.main(AdminTool.java:59)
How can I recover a database that fails to start?
I cannot find anything in the documentation that helps me troubleshoot this type of problem.
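For what it is worth, the only offline recovery path I can see, assuming a recent dump of sandbox exists (the /home/neo4j-backups path and sandbox.dump file below are hypothetical), would be something like:
# hypothetical: /home/neo4j-backups holds an earlier neo4j-admin dump of sandbox
# --force overwrites the damaged store with the contents of the dump
docker run --rm -it \
  -v /home/neo4j:/data \
  -v /home/neo4j-backups:/backups \
  -e NEO4J_ACCEPT_LICENSE_AGREEMENT=yes \
  neo4j:4.0-enterprise \
  neo4j-admin load --database=sandbox --from=/backups/sandbox.dump --force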
Thanks for your help,
Rogelio
I was following this guide: https://www.elastic.co/guide/en/beats/filebeat/current/running-on-docker.html#_run_the_filebeat_setup
docker run \
docker.elastic.co/beats/filebeat:8.0.0 \
setup -E setup.kibana.host=kibana:port \
-E output.elasticsearch.hosts=["https://testelk.es.us-east4.gcp.elastic-cloud.com:9243"] \
cloud -E cloud.id=cloudid \
-E cloud.auth=elastic:pass
I get the following error on macOS when I run it in my terminal. Is there a way to fix it?
zsh: no matches found: output.elasticsearch.hosts=[https://testelk.es.us-east4.gcp.elastic-cloud.com:9243]
As written in the documentation, if you are using Elastic Cloud, you need to remove the output.elasticsearch.hosts option and specify only cloud.id and cloud.auth. (The zsh error itself comes from the unquoted [...] in output.elasticsearch.hosts=[...], which zsh tries to expand as a glob pattern; wrapping that -E argument in single quotes would silence it, but for Elastic Cloud the option should simply be dropped.)
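A sketch of the corrected invocation for Elastic Cloud, where cloudid and elastic:pass are placeholders for the real Cloud ID and credentials:
docker run \
  docker.elastic.co/beats/filebeat:8.0.0 \
  setup \
  -E cloud.id=cloudid \
  -E cloud.auth=elastic:pass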
I am trying to run Kafdrop using its Docker image. I am able to connect to a non-SSL broker by running:
docker run -d --rm -p 9000:9000 --network=host -e KAFKA_BROKERCONNECT=KafkaServer:9092 obsidiandynamics/kafdrop
But when I try to connect to the same broker with SSL enabled, using:
docker run -d --rm -p 9000:9000 --network=host -e KAFKA_BROKERCONNECT=KafkaServer:9092 -e KAFKA_PROPERTIES=$(cat kafka.properties | base64) -e KAFKA_TRUSTSTORE=$(cat myTrustStore | base64) -e KAFKA_KEYSTORE=$(cat myKeyStore | base64) obsidiandynamics/kafdrop
I get the following error:
/usr/bin/docker-current: Error parsing reference: "bmZpZy9wb21LZXlTdG9yZQpzc2wua2V5c3RvcmUucGFzc3dvcmQ9Y2hhbmdlaXQKc3NsLmtleS5w" is not a valid repository/tag: repository name must be lowercase.
and if I don't use base64 in the command, I get this error:
/usr/bin/docker-current: Error parsing reference: "ssl.keystore.location=/opt/KafdropConfig/myKeyStore" is not a valid repository/tag: invalid reference format.
I have copied kafka.properties, myTrustStore and myKeyStore onto the machine where Docker is running.
Can you please help me identify the mistake I am making here?
not a valid repository/tag: repository name must be lowercase
This is a docker run parsing error, which means your command was not escaped properly.
Try adding quotes around the command substitutions:
docker run -d --rm -p 9000:9000 \
-e KAFKA_BROKERCONNECT=KafkaServer:9092 \
-e KAFKA_PROPERTIES="$(cat kafka.properties | base64)" \
-e KAFKA_TRUSTSTORE="$(cat myTrustStore | base64)" \
-e KAFKA_KEYSTORE="$(cat myKeyStore | base64)" \
obsidiandynamics/kafdrop
You can use base64 as shown, or just volume-mount the files; see:
https://github.com/obsidiandynamics/kafdrop#connecting-to-a-secure-broker
Note: I removed --network=host because if you really need that, then your Kafka networking needs to be adjusted to allow external clients.
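To see why the unquoted form fails, here is a minimal demonstration; demo.txt is a hypothetical stand-in for the multi-line output that base64 produces for anything longer than 76 characters:
# base64 wraps long output onto multiple lines, so the unquoted substitution
# expands to several words instead of one
printf 'abc\nDEF\n' > demo.txt
docker run --rm -e VAR=$(cat demo.txt) alpine env
# the shell actually runs: docker run --rm -e VAR=abc DEF alpine env
# docker then parses "DEF" as the image reference and rejects it, which is
# exactly the "repository name must be lowercase" error seen above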
I am configuring a 3-node Kafka cluster (3 brokers and 3 ZooKeepers, with SSL enabled) using Docker. Now I need to set up a Schema Registry. If I just need to use one Schema Registry, is that possible? If yes, what should my SSL truststore and keystore configs look like when running it?
I did refer to Confluent's documentation, which discusses Kafka-based and ZooKeeper-based leader election, but it is not clear to me.
This is my faulty docker run command.
docker run -d \
  --net=host \
  --name=schema-registry \
  -e SCHEMA_REGISTRY_KAFKASTORE_CONNECTION_URL=localhost:22181,localhost:32181,localhost:42181 \
  -e SCHEMA_REGISTRY_HOST_NAME=localhost \
  -e SCHEMA_REGISTRY_DEBUG=true \
  -e SCHEMA_REGISTRY_KAFKASTORE_SECURITY_PROTOCOL=SSL \
  -e SCHEMA_REGISTRY_KAFKASTORE_SSL_TRUSTSTORE_LOCATION=kafka.broker1.truststore.jks \
  -e SCHEMA_REGISTRY_KAFKASTORE_SSL_TRUSTSTORE_PASSWORD=broker1_truststore_creds \
  -e SCHEMA_REGISTRY_KAFKASTORE_SSL_KEYSTORE_LOCATION=kafka.broker1.keystore.jks \
  -e SCHEMA_REGISTRY_KAFKASTORE_SSL_KEYSTORE_PASSWORD=broker1_keystore_creds \
  -e SCHEMA_REGISTRY_KAFKASTORE_SSL_KEY_PASSWORD=broker1_sslkey_creds \
  -v ${KAFKA_SSL_SECRETS_DIR}:/etc/kafka/secrets \
  confluentinc/cp-schema-registry:5.0.1
I am sure my understanding of how the Schema Registry works in a clustered setup is not correct.
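For comparison, a hedged sketch of what a single-instance command might look like: the *_LOCATION values must be container-side paths under the mounted /etc/kafka/secrets directory, and the *_PASSWORD variables expect the actual store passwords (changeit below is a placeholder), not the names of credential files:
# sketch only: paths and passwords are placeholders, not verified values
docker run -d \
  --net=host \
  --name=schema-registry \
  -e SCHEMA_REGISTRY_HOST_NAME=localhost \
  -e SCHEMA_REGISTRY_KAFKASTORE_CONNECTION_URL=localhost:22181,localhost:32181,localhost:42181 \
  -e SCHEMA_REGISTRY_KAFKASTORE_SECURITY_PROTOCOL=SSL \
  -e SCHEMA_REGISTRY_KAFKASTORE_SSL_TRUSTSTORE_LOCATION=/etc/kafka/secrets/kafka.broker1.truststore.jks \
  -e SCHEMA_REGISTRY_KAFKASTORE_SSL_TRUSTSTORE_PASSWORD=changeit \
  -e SCHEMA_REGISTRY_KAFKASTORE_SSL_KEYSTORE_LOCATION=/etc/kafka/secrets/kafka.broker1.keystore.jks \
  -e SCHEMA_REGISTRY_KAFKASTORE_SSL_KEYSTORE_PASSWORD=changeit \
  -e SCHEMA_REGISTRY_KAFKASTORE_SSL_KEY_PASSWORD=changeit \
  -v ${KAFKA_SSL_SECRETS_DIR}:/etc/kafka/secrets \
  confluentinc/cp-schema-registry:5.0.1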
I'm running the following command to launch an InfluxDB container. This should create a new database with the name defaultdb.
docker run -p 8086:8086 \
-e INFLUXDB_DB=defaultdb -e INFLUXDB_ADMIN_ENABLED=true \
-e INFLUXDB_ADMIN_USER=admin -e INFLUXDB_ADMIN_PASSWORD=adminpass \
-e INFLUXDB_USER=user -e INFLUXDB_USER_PASSWORD=userpass \
-v influxdb:/var/lib/influxdb \
influxdb:latest
But it doesn't create the default database defaultdb. It creates the database db0 instead. What am I doing wrong?
https://hub.docker.com/_/influxdb/
Thanks in advance.
The problem is probably coming from the volume.
-v influxdb:/var/lib/influxdb
In particular, if you previously created a database using the same command but without specifying INFLUXDB_DB=defaultdb, that old data is still in the volume, and the image's initialization (which would create defaultdb) is skipped because the data directory is not empty.
To solve the issue, remove the old volume and rerun the command:
docker volume rm influxdb
The issue was due to the INFLUXDB_ADMIN_ENABLED=true line.
The documentation states:
The administrator interface is deprecated as of 1.1.0 and will be
removed in 1.3.0.
I was using the latest version, which is (currently) 1.4, so it seems there was a problem with that deprecated INFLUXDB_ADMIN_ENABLED variable.
After removing that line, everything worked perfectly:
docker run -p 8086:8086 \
-e INFLUXDB_DB=defaultdb \
-e INFLUXDB_ADMIN_USER=admin \
-e INFLUXDB_ADMIN_PASSWORD=adminpass \
-e INFLUXDB_USER=user \
-e INFLUXDB_USER_PASSWORD=userpass \
-v influxdb:/var/lib/influxdb \
influxdb:latest
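To verify the database was actually created, a quick check against the 1.x HTTP API (assuming the port mapping and credentials above):
# list databases; defaultdb should appear in the results
curl -G http://localhost:8086/query \
  -u admin:adminpass \
  --data-urlencode "q=SHOW DATABASES"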
I have a Docker cluster behind a VPN. I have downloaded the TFS agent container and want to connect it to our TFS, but it cannot connect, failing with:
Determining matching VSTS agent...
Downloading and installing VSTS agent...
curl: (35) gnutls_handshake() failed: Error in the pull function.
gzip: stdin: unexpected end of file
tar: Child returned status 1
tar: Error is not recoverable: exiting now
It can ping Google, but it cannot ping the public TFS. I would consider this a network issue, but an nginx container was pulled and started successfully.
docker run \
  -e VSTS_ACCOUNT=xxx \
  -e TFS_HOST=yyy \
  -e VSTS_TOKEN=zzz \
  -it microsoft/vsts-agent
I also tried this:
docker run \
  -e VSTS_ACCOUNT=xxx \
  -e VSTS_AGENT='$(hostname)-agent' \
  -e VSTS_TOKEN=yyy \
  -e TFS_URL=zzz \
  -e VSTS_POOL=eee \
  -e VSTS_WORK='/var/vsts/$VSTS_AGENT' \
  -v /var/vsts:/var/vsts \
  -it microsoft/vsts-agent:ubuntu-14.04
Although it is behind the VPN, I can access the repo from a browser, by the way.
It seems Docker shows an SSL handshake issue even when the real problem is a network one, although the log shows the connection through curl started out OK. This issue was solved by adding the IP to the whitelist.
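A quick way to confirm it is the network path and not the agent itself, assuming https://yyy stands for your TFS host as above:
# a failing TLS handshake here (as with gnutls above) points at the
# VPN/firewall rather than at the agent container
curl -v https://yyy/ > /dev/null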