Neo4j 3.5.11 failure to launch in Docker - docker

I'm attempting to upgrade Neo4j 3.5.5 to version 3.5.11 using the new APOC environment variable.
The upgrade works on my dev box running Ubuntu 18.04 via:
docker run -d -p 7474:7474 -p 7687:7687 -v /var/lib/neo4j/data:/data -v /var/lib/neo4j/plugins:/var/lib/neo4j/plugins -v /var/lib/neo4j/logs:/var/log/neo4j --ulimit=nofile=40000:40000 --env=NEO4J_AUTH=none -e NEO4J_dbms_allow__upgrade=true -e NEO4J_dbms_security_procedures_unrestricted=apoc.\\\* --env 'NEO4JLABS_PLUGINS=["apoc", "graph-algorithms"]' neo4j:3.5.11
However, it fails on the production platform: an EC2 instance running as a worker node in a Docker swarm, launched in a container built from node:10.16.0-alpine. Earlier versions up to and including 3.5.5 install without error and have run successfully in this configuration for several years. The graph.db is identical on the dev and production platforms. The compose file is as follows, where the only modifications are the neo4j version number and the added NEO4JLABS_PLUGINS variable:
version: "3.2"
services:
neo4j:
image: neo4j:3.5.11
environment:
- HOME=/root
- NEO4J_AUTH=none
- NEO4J_dbms_allow__upgrade=true
- NEO4J_dbms_memory_pagecache_size=100M
- NEO4J_dbms_memory_heap_initial__size=2G
- NEO4J_dbms_memory_heap_max__size=2G
- NEO4J_dbms_security_procedures_unrestricted=apoc.\\\*
- NEO4JLABS_PLUGINS=["apoc", "graph-algorithms"]
ports:
- "7474:7474"
- "7687:7687"
volumes:
- gc01_data:/data
- gc01_neo4j_logs:/logs
networks:
- backend
ulimits:
nofile:
soft: 65535
hard: 65535
deploy:
replicas: 1
placement:
constraints: [engine.labels.node_task == neo4j]
restart_policy:
condition: on-failure
secrets:
gcrt_cert:
external: true
gcrt_key:
external: true
networks:
frontend:
backend:
volumes:
nmod_core:
gc01_neo4j_logs:
gc01_data:
external: true
The error log (from CloudWatch) looks like this:
17:33:12
Fetching versions.json for Plugin 'apoc' from https://github.com/neo4j-contrib/neo4j-apoc-procedures/raw/master/versions.json
Installing Plugin 'apoc' from https://github.com/neo4j-contrib/neo4j-apoc-procedures/releases/download/3.5.0.5/apoc-3.5.0.5-all.jar to /var/lib/neo4j/plugins/apoc.jar
Fetching versions.json for Plugin 'graph-algorithms' from https://github.com/neo4j-contrib/neo4j-graph-algorithms/raw/master/versions.json
Installing Plugin 'graph-algorithms' from https://s3-eu-west-1.amazonaws.com/com.neo4j.graphalgorithms.dist/neo4j-graph-algorithms-3.5.11.0-standalone.jar to /var/lib/neo4j/plugins/graph-algorithms.jar
Active database: graph.db
Directories in use:
home: /var/lib/neo4j
config: /var/lib/neo4j/conf
logs: /logs
plugins: /var/lib/neo4j/plugins
import: /var/lib/neo4j/import
data: /var/lib/neo4j/data
certificates: /var/lib/neo4j/certificates
run: /var/lib/neo4j/run
Starting Neo4j.
2019-10-18 17:33:25.050+0000 INFO ======== Neo4j 3.5.11 ========
2019-10-18 17:33:25.109+0000 INFO Starting...
2019-10-18 17:33:25.576+0000 ERROR Failed to start Neo4j: Starting Neo4j failed: Component 'org.neo4j.server.database.LifecycleManagingDatabase#192d43ce' was successfully initialized, but failed to start. Please see the attached cause exception "/logs/debug.log (Permission denied)". Starting Neo4j failed: Component 'org.neo4j.server.database.LifecycleManagingDatabase#192d43ce' was successfully ini
org.neo4j.server.ServerStartupException: Starting Neo4j failed: Component 'org.neo4j.server.database.LifecycleManagingDatabase#192d43ce' was successfully initialized, but failed to start. Please see the attached cause exception "/logs/debug.log (Permission denied)".
at org.neo4j.server.exception.ServerStartupErrors.translateToServerStartupError(ServerStartupErrors.java:45)
at org.neo4j.server.AbstractNeoServer.start(AbstractNeoServer.java:187)
at org.neo4j.server.ServerBootstrapper.start(ServerBootstrapper.java:124)
at org.neo4j.server.ServerBootstrapper.start(ServerBootstrapper.java:91)
at org.neo4j.server.CommunityEntryPoint.main(CommunityEntryPoint.java:32)
Caused by: org.neo4j.kernel.lifecycle.LifecycleException: Component 'org.neo4j.server.database.LifecycleManagingDatabase#192d43ce' was successfully initialized, but failed to start. Please see the attached cause exception "/logs/debug.log (Permission denied)".
at org.neo4j.kernel.lifecycle.LifeSupport$LifecycleInstance.start(LifeSupport.java:473)
at org.neo4j.kernel.lifecycle.LifeSupport.start(LifeSupport.java:111)
at org.neo4j.server.AbstractNeoServer.start(AbstractNeoServer.java:180)
... 3 more
Caused by: java.lang.RuntimeException: java.io.FileNotFoundException: /logs/debug.log (Permission denied)
at org.neo4j.graphdb.factory.module.PlatformModule.createLogService(PlatformModule.java:324)
at org.neo4j.graphdb.factory.module.PlatformModule.<init>(PlatformModule.java:181)
at org.neo4j.graphdb.facade.GraphDatabaseFacadeFactory.createPlatform(GraphDatabaseFacadeFactory.java:263)
at org.neo4j.graphdb.facade.GraphDatabaseFacadeFactory.initFacade(GraphDatabaseFacadeFactory.java:180)
at org.neo4j.graphdb.facade.GraphDatabaseFacadeFactory.newFacade(GraphDatabaseFacadeFactory.java:148)
at org.neo4j.server.database.CommunityGraphFactory.newGraphDatabase(CommunityGraphFactory.java:41)
at org.neo4j.server.database.LifecycleManagingDatabase.start(LifecycleManagingDatabase.java:90)
at org.neo4j.kernel.lifecycle.LifeSupport$LifecycleInstance.start(LifeSupport.java:452)
... 5 more
Caused by: java.io.FileNotFoundException: /logs/debug.log (Permission denied)
at java.io.FileOutputStream.open0(Native Method)
at java.io.FileOutputStream.open(FileOutputStream.java:270)
at java.io.FileOutputStream.<init>(FileOutputStream.java:213)
at org.neo4j.io.fs.DefaultFileSystemAbstraction.openAsOutputStream(DefaultFileSystemAbstraction.java:72)
at org.neo4j.io.file.Files.createOrOpenAsOutputStream(Files.java:51)
at org.neo4j.logging.RotatingFileOutputStreamSupplier.openOutputFile(RotatingFileOutputStreamSupplier.java:338)
at org.neo4j.logging.RotatingFileOutputStreamSupplier.<init>(RotatingFileOutputStreamSupplier.java:137)
at org.neo4j.logging.RotatingFileOutputStreamSupplier.<init>(RotatingFileOutputStreamSupplier.java:121)
at org.neo4j.logging.internal.StoreLogService.<init>(StoreLogService.java:181)
at org.neo4j.logging.internal.StoreLogService.<init>(StoreLogService.java:45)
at org.neo4j.logging.internal.StoreLogService$Builder.build(StoreLogService.java:125)
at org.neo4j.graphdb.factory.module.PlatformModule.createLogService(PlatformModule.java:320)
... 12 more
2019-10-18 17:33:25.594+0000 INFO Neo4j Server shutdown initiated by request
Anything stand out? I'm seeing "Permission Denied" errors I don't understand. Thanks!

Yesterday I had the same problem, same error message, as the OP in 2019. Pretty weird to realize the OP was me in a past life.
Anyway, I realized the "home" directory for current versions of neo4j is at /var/lib/neo4j. This is the default, or may be set explicitly via env var NEO4J_HOME. It's displayed in the neo4j logs, as above where it says Directories in Use: home: /var/lib/neo4j.
Back then I was mounting the volume containing the neo4j data in a compose file as above - gc01_data:/data. Now I'm using a Docker CLI run option as -v /var/lib/neo4j/data:/data. In both cases the result is that data is mounted inside the container at /data, when neo4j is looking at /var/lib/neo4j/data. The error reads as "Permission Denied" but the real problem is "file not found." The mount target for the data volume has to match the neo4j home, as in: -v /var/lib/neo4j/data:/var/lib/neo4j/data
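As a minimal sketch, assuming the host directory /var/lib/neo4j/data and keeping only the essential flags from the original command, the corrected CLI invocation would look like this:
# key change: mount the host data directory at the path neo4j reports under
# "Directories in use", not at /data
docker run -d \
  -p 7474:7474 -p 7687:7687 \
  --env NEO4J_AUTH=none \
  -e NEO4J_dbms_allow__upgrade=true \
  -v /var/lib/neo4j/data:/var/lib/neo4j/data \
  neo4j:3.5.11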

Related

ERROR Disk error while locking directory /var/kafka-logs in Kafka 3.1.0

I am using Kafka 3.1.0, Portainer 2.9.0 and Docker 20.10.11 to build a cluster with 1 broker, 1 consumer and 1 producer.
I am trying to map the log dirs via the docker-compose file from the container to the host machine, in order to persist the content of that directory (because if the container goes down, that information will be lost). I know it is recommended to have more than 1 broker, but since I am just testing this feature, I don't want to overcomplicate things.
The problem I get is
ERROR Disk error while locking directory /var/kafka-logs (kafka.server.LogDirFailureChannel)
java.nio.file.AccessDeniedException: /var/kafka-logs/.lock
[2022-03-31 12:00:53,986] ERROR [KafkaServer id=1] Fatal error during KafkaServer startup. Prepare to shutdown (kafka.server.KafkaServer)
I have checked, and the user that runs the broker has all permissions (since I created that directory in my Dockerfile):
RUN mkdir /var/kafka-logs \
&& chown -R kafka:kafka /var/kafka-logs \
&& chmod -R 777 /var/kafka-logs
I have seen that this problem existed in version 3.0 and was fixed in 3.1, and also that it only happened on Windows, so I don't know the source of this problem.
Edit: I have checked, and even without the mapping it still prints that error. It must be a problem with changing the log.dirs property to a non-/tmp directory, because if I leave the default configuration it works just fine.
By default I mean the following:
log.dirs=/tmp/kafka-logs
My docker-compose:
version: "3.8"
networks:
net:
external: true
services:
kafka-broker1:
image: registry.gitlab.com/repo/kafka:2.13_3.1.0_v0.1
volumes:
- /var/volumes/kafka/config/server1.properties:/opt/kafka/config/server.properties
networks:
- net
kafka-producer:
image: registry.gitlab.com/repo/kafka:2.13_3.1.0_v0.1
stdin_open: true
tty: true
networks:
- net
kafka-consumer:
image: registry.gitlab.com/repo/kafka:2.13_3.1.0_v0.1
stdin_open: true
tty: true
networks:
- net
The problem was that I had been building several Docker images and containers with the same name, and the container didn't pick up the newest image.
Once I erased the stale images and the container picked up the latest one, it all worked just fine, so it was basically a problem of the old image not having enough permissions to take the lock on that directory.
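A hedged sketch of that cleanup, assuming the image tag from the compose file above (your stale tags and service names may differ):
# list local images and spot stale duplicates of the kafka image
docker image ls | grep kafka
# remove the stale image by ID, pull the current one, and recreate the broker
docker rmi <old-image-id>
docker pull registry.gitlab.com/repo/kafka:2.13_3.1.0_v0.1
docker-compose up -d --force-recreate kafka-broker1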

Molecule : Testing roles : Failed to get Dbus Connection Operation not permitted

I'm facing an issue with my Molecule test. For context, I began studying this tool two days ago.
On an Ubuntu VM running with Vagrant, I have created a role, initialized Molecule's folders, and created a Testinfra test file (with the Docker provider).
The error occurs while my role's tasks are running: at the step that checks the service is running, it fails.
fatal: [instance]: FAILED! => {"changed": false, "msg": "Could not find the requested service httpd: "}
The role is designed to simply install two packages, including httpd, on a CentOS image.
When I log in directly to the Molecule instance (through Docker) and simply type systemctl, the error message is
Failed to get D-Bus connection: Operation not permitted
As advised by Geerlingguy, I have specified a volume mapped to the cgroup folder:
platforms:
  - name: instance
    # image: docker.io/pycontribs/centos:7
    image: geerlingguy/docker-${MOLECULE_DISTRO:-centos7}-ansible:latest
    volumes:
      - /sys/fs/cgroup:/sys/fs/cgroup:ro
The error is not related to Testinfra, but only to the built Docker image.
Could someone help me understand why I get this error message?
Is it because I'm on a VirtualBox VM run by Vagrant?
Thanks all for reading :-)
I have added this to my molecule.yml config, per the Molecule documentation ( https://molecule.readthedocs.io/en/latest/examples.html#docker ):
platforms:
  - name: instance
    # image: docker.io/pycontribs/centos:7
    image: geerlingguy/docker-centos7-ansible:latest
    capabilities:
      - SYS_ADMIN
    command: /sbin/init
systemctl is working fine now.
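To confirm, a quick check once the instance is recreated (this is only a sketch: the container name instance comes from the platforms block above, and httpd is the service the failing task checks):
# recreate the instance, re-apply the role, then check the service from inside the container
molecule destroy && molecule converge
docker exec -it instance systemctl status httpd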

I cannot use --package option on bitnami/spark docker container

I pulled the Docker image and executed the commands below to run it:
docker run -it bitnami/spark:latest /bin/bash
spark-shell --packages="org.elasticsearch:elasticsearch-spark-20_2.11:7.5.0"
and I got a message like the one below:
Ivy Default Cache set to: /opt/bitnami/spark/.ivy2/cache
The jars for the packages stored in: /opt/bitnami/spark/.ivy2/jars
:: loading settings :: url = jar:file:/opt/bitnami/spark/jars/ivy-2.4.0.jar!/org/apache/ivy/core/settings/ivysettings.xml
org.elasticsearch#elasticsearch-spark-20_2.11 added as a dependency
:: resolving dependencies :: org.apache.spark#spark-submit-parent-c785f3e6-7c78-469f-ab46-451f8be61a4c;1.0
confs: [default]
Exception in thread "main" java.io.FileNotFoundException: /opt/bitnami/spark/.ivy2/cache/resolved-org.apache.spark-spark-submit-parent-c785f3e6-7c78-469f-ab46-451f8be61a4c-1.0.xml (No such file or directory)
at java.io.FileOutputStream.open0(Native Method)
at java.io.FileOutputStream.open(FileOutputStream.java:270)
at java.io.FileOutputStream.<init>(FileOutputStream.java:213)
at java.io.FileOutputStream.<init>(FileOutputStream.java:162)
at org.apache.ivy.plugins.parser.xml.XmlModuleDescriptorWriter.write(XmlModuleDescriptorWriter.java:70)
at org.apache.ivy.plugins.parser.xml.XmlModuleDescriptorWriter.write(XmlModuleDescriptorWriter.java:62)
at org.apache.ivy.core.module.descriptor.DefaultModuleDescriptor.toIvyFile(DefaultModuleDescriptor.java:563)
at org.apache.ivy.core.cache.DefaultResolutionCacheManager.saveResolvedModuleDescriptor(DefaultResolutionCacheManager.java:176)
at org.apache.ivy.core.resolve.ResolveEngine.resolve(ResolveEngine.java:245)
at org.apache.ivy.Ivy.resolve(Ivy.java:523)
at org.apache.spark.deploy.SparkSubmitUtils$.resolveMavenCoordinates(SparkSubmit.scala:1300)
at org.apache.spark.deploy.DependencyUtils$.resolveMavenDependencies(DependencyUtils.scala:54)
at org.apache.spark.deploy.SparkSubmit.prepareSubmitEnvironment(SparkSubmit.scala:304)
at org.apache.spark.deploy.SparkSubmit.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:774)
at org.apache.spark.deploy.SparkSubmit.doRunMain$1(SparkSubmit.scala:161)
at org.apache.spark.deploy.SparkSubmit.submit(SparkSubmit.scala:184)
at org.apache.spark.deploy.SparkSubmit.doSubmit(SparkSubmit.scala:86)
at org.apache.spark.deploy.SparkSubmit$$anon$2.doSubmit(SparkSubmit.scala:920)
at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:929)
at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
I tried other packages, but they don't work either, all with the same error message.
Can you give me some advice on how to avoid this error?
I found the solution, as given in https://github.com/bitnami/bitnami-docker-spark/issues/7
What we have to do is create a volume on the host mapped to the path inside the container:
volumes:
  - ./jars_dir:/opt/bitnami/spark/ivy:z
and give this path as the Ivy cache path, like this:
spark-shell --conf spark.jars.ivy=/opt/bitnami/spark/ivy \
  --conf spark.cassandra.connection.host=127.0.0.1 \
  --packages com.datastax.spark:spark-cassandra-connector_2.12:3.0.0-beta \
  --conf spark.sql.extensions=com.datastax.spark.connector.CassandraSparkExtensions
This all happened because /opt/bitnami/spark is not writable, and we have to mount a volume to bypass that.
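If you launch the container with the Docker CLI instead of Compose, a minimal sketch of the same workaround looks like this (the host directory ./jars_dir and the Elasticsearch coordinates from the question are assumptions; adjust them to your setup):
# bind-mount a writable host directory as the Ivy cache and point spark.jars.ivy at it
docker run -it --rm \
  -v "$PWD/jars_dir:/opt/bitnami/spark/ivy" \
  bitnami/spark:latest \
  spark-shell --conf spark.jars.ivy=/opt/bitnami/spark/ivy \
    --packages org.elasticsearch:elasticsearch-spark-20_2.11:7.5.0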
The error "java.io.FileNotFoundException: /opt/bitnami/spark/.ivy2/" occured because the location /opt/bitnami/spark/ is not writable. so in order to resolve this issue do modify the master spark service like this.
Added user as root and add mounted volume path for required jars.
see the working block of spark service written in docker compose:
spark:
  image: docker.io/bitnami/spark:3
  container_name: spark
  environment:
    - SPARK_MODE=master
    - SPARK_RPC_AUTHENTICATION_ENABLED=no
    - SPARK_RPC_ENCRYPTION_ENABLED=no
    - SPARK_LOCAL_STORAGE_ENCRYPTION_ENABLED=no
    - SPARK_SSL_ENABLED=no
  user: root
  ports:
    - '8880:8080'
  volumes:
    - ./spark-defaults.conf:/opt/bitnami/spark/conf/spark-defaults.conf
    - ./jars_dir:/opt/bitnami/spark/ivy:z
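With that service defined, a rough usage sketch (the container name spark comes from the block above; the package coordinates are the ones from the question):
docker compose up -d spark
# run the shell inside the container, pointing the Ivy cache at the mounted, writable path
docker exec -it spark spark-shell \
  --conf spark.jars.ivy=/opt/bitnami/spark/ivy \
  --packages org.elasticsearch:elasticsearch-spark-20_2.11:7.5.0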

Elastic search TestContainers Timed out waiting for URL to be accessible in Docker

Local env:
MacOS 10.14.6
Docker Desktop 2.0.1.2
Docker Engine 19.03.2
Compose Engine 1.24.1
Test containers 1.12.1
I'm using Elasticsearch in an app, and I want to be able to use Testcontainers in my integration tests. Sample code from a Play Framework app that uses the Elasticsearch testcontainer:
private static final ElasticsearchContainer ES = new ElasticsearchContainer();

@BeforeAll
public static void setup() {
    ES.start();
}
This works when testing locally, but I want to be able to run this inside a Docker container to run on my CI server. I'm getting this exception when running the tests inside the Docker container:
[warn] o.t.u.RegistryAuthLocator - Failure when attempting to lookup auth config (dockerImageName: alpine:3.5, configFile: /root/.docker/config.json. Falling back to docker-java default behaviour. Exception message: /root/.docker/config.json (No such file or directory)
[warn] o.t.u.RegistryAuthLocator - Failure when attempting to lookup auth config (dockerImageName: quay.io/testcontainers/ryuk:0.2.3, configFile: /root/.docker/config.json. Falling back to docker-java default behaviour. Exception message: /root/.docker/config.json (No such file or directory)
?? Checking the system...
? Docker version should be at least 1.6.0
? Docker environment should have more than 2GB free disk space
[warn] o.t.u.RegistryAuthLocator - Failure when attempting to lookup auth config (dockerImageName: docker.elastic.co/elasticsearch/elasticsearch:7.1.1, configFile: /root/.docker/config.json. Falling back to docker-java default behaviour. Exception message: /root/.docker/config.json (No such file or directory)
[error] d.e.c.1.1] - Could not start container
org.testcontainers.containers.ContainerLaunchException: Timed out waiting for URL to be accessible (http://172.17.0.1:32911/ should return HTTP [200])
at org.testcontainers.containers.wait.strategy.HttpWaitStrategy.waitUntilReady(HttpWaitStrategy.java:197)
at org.testcontainers.containers.wait.strategy.AbstractWaitStrategy.waitUntilReady(AbstractWaitStrategy.java:35)
at org.testcontainers.containers.GenericContainer.waitUntilContainerStarted(GenericContainer.java:675)
at org.testcontainers.containers.GenericContainer.tryStart(GenericContainer.java:332)
at org.testcontainers.containers.GenericContainer.lambda$doStart$0(GenericContainer.java:285)
at org.rnorth.ducttape.unreliables.Unreliables.retryUntilSuccess(Unreliables.java:81)
at org.testcontainers.containers.GenericContainer.doStart(GenericContainer.java:283)
at org.testcontainers.containers.GenericContainer.start(GenericContainer.java:272)
at controllers.HomeControllerTest.setup(HomeControllerTest.java:56)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
I've read the instructions here: https://www.testcontainers.org/supported_docker_environment/continuous_integration/dind_patterns/
So my docker-compose.yml looks like this (note: I've been testing with another ES container, as seen commented out below, but I have not been using it with this test; $INSTANCE is a random 16-character string for a particular test run):
version: '3'
services:
  # elasticsearch:
  #   container_name: elasticsearch_${INSTANCE}
  #   image: docker.elastic.co/elasticsearch/elasticsearch:6.7.2
  #   ports:
  #     - 9200:9200
  #     - 9300:9300
  #   command: elasticsearch -E transport.host=0.0.0.0
  #   logging:
  #     driver: 'none'
  #   environment:
  #     ES_JAVA_OPTS: "-Xms750m -Xmx750m"
  mainapp:
    container_name: mainapp_${INSTANCE}
    image: test_image:${INSTANCE}
    stop_signal: SIGKILL
    stdin_open: true
    tty: true
    working_dir: $PWD
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - $PWD:$PWD
    environment:
      ES_JAVA_OPTS: "-Xms1G -Xmx1G"
    command: /bin/bash /projectfolder/build/tests/wrapper.sh
I've also tried running my tests with this command but received the same error:
docker run -it --rm -v $PWD:$PWD -w $PWD -v /var/run/docker.sock:/var/run/docker.sock test_image:68F75D8FD4C7003772C7E52B87B774F5 /bin/bash /testproject/build/tests/wrapper.sh
I tried creating a postgres container the same way inside my testing container and had no issues. I've also tried making a GenericContainer with the Elasticsearch image with no luck.
I don't think this is a connection issue, because if I run curl 172.17.0.1:{port printed to test console} from inside my test container, I do get a valid Elasticsearch response with status code 200, so it almost seems like it's timing out trying to connect even though the connection is there.
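For reference, the manual check described above as a command (the port is reassigned on every run, so substitute the one printed to the test console; 32911 is just the value from the error message above):
# expect 200 if the node is reachable from inside the test container
curl -s -o /dev/null -w '%{http_code}\n' http://172.17.0.1:32911/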
Thanks.

docker-compose issue: Permission denied when attempting to create/mount volume

I have the following docker-compose.yml file:
version: "3"
services:
dbs-poa-loc001d:
image: percona
volumes:
- ./mysql_backup:/var/lib/mysql
- ./create_databases:/docker-entrypoint-initdb.d
hostname: "dbs-poa-loc001d"
container_name: dbs-poa-loc001d
environment:
MYSQL_ALLOW_EMPTY_PASSWORD: "yes"
ports:
- "3306:3306"
networks:
- azion-network
...
When I try to create the dbs-poa-loc001d service (database for the project), I get the following error:
Starting dbs-poa-loc001d ... done
Attaching to dbs-poa-loc001d
dbs-poa-loc001d | Initializing database
dbs-poa-loc001d | mysqld: Can't create/write to file '/var/lib/mysql/is_writable' (Errcode: 13 - Permission denied)
dbs-poa-loc001d | 2019-01-11T01:17:52.060984Z 0 [Warning] TIMESTAMP with implicit DEFAULT value is deprecated. Please use --explicit_defaults_for_timestamp server option (see documentation for more details).
dbs-poa-loc001d | 2019-01-11T01:17:52.062286Z 0 [ERROR] --initialize specified but the data directory exists and is not writable. Aborting.
dbs-poa-loc001d | 2019-01-11T01:17:52.062299Z 0 [ERROR] Aborting
dbs-poa-loc001d |
dbs-poa-loc001d exited with code 1
This error doesn't happen on my macOS computer at work, but on my home computer (running Ubuntu 16.04) it does. I did notice that the mysql_backup folder on the host, created to hold the volume data, is owned by user AND group root. Can anybody tell me what is going on, and how do I fix it? Already tried without success:
Running docker-compose commands using sudo
Manually changing the owner and user of the folder to my actual (low privileged) user.
My current setup and installed versions are:
Ubuntu 16.04
Docker version 18.09.0, build 4d60db4
docker-compose version 1.23.2, build 1110ad0
docker-compose was installed using sudo pip install docker-compose
Can you try setting the owner of mysql_backup to 1001:0?
Something like sudo chown -R 1001:0 ./mysql_backup
or, as an alternative (but only if the folder is empty), sudo chmod 777 ./mysql_backup
According to the Percona Dockerfile, the mysql user id is 1001:
https://github.com/percona/percona-docker/blob/master/percona-server.80/Dockerfile
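A hedged sketch of applying and verifying that fix (the folder and service name come from the compose file above):
# give the mysql user inside the percona image (uid 1001) ownership of the data directory
sudo chown -R 1001:0 ./mysql_backup
# confirm the numeric owner, then recreate the service
ls -lnd ./mysql_backup
docker-compose up -d --force-recreate dbs-poa-loc001d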
