Neo4j Docker Insufficient Memory

I'm having this weird issue with Neo4j in Docker. This is my docker-compose file:
version: '3'
services:
  neo4j:
    ports:
      - "7473:7473"
      - "7474:7474"
      - "7687:7687"
    volumes:
      - neo4j_data:/data
    image: neo4j:3.3
volumes:
  neo4j_data: {}
I'm using Docker Toolbox on Windows 10. I have tested this on two different machines and it works perfectly. However, on a third machine, the container always crashes a few seconds after creation. Here's the log for this container:
$ docker container logs database_neo4j_1
Active database: graph.db
Directories in use:
home: /var/lib/neo4j
config: /var/lib/neo4j/conf
logs: /var/lib/neo4j/logs
plugins: /var/lib/neo4j/plugins
import: /var/lib/neo4j/import
data: /var/lib/neo4j/data
certificates: /var/lib/neo4j/certificates
run: /var/lib/neo4j/run
Starting Neo4j.
2018-11-18 12:50:41.954+0000 WARN Unknown config option: causal_clustering.discovery_listen_address
2018-11-18 12:50:41.965+0000 WARN Unknown config option: causal_clustering.raft_advertised_address
2018-11-18 12:50:41.965+0000 WARN Unknown config option: causal_clustering.raft_listen_address
2018-11-18 12:50:41.967+0000 WARN Unknown config option: ha.host.coordination
2018-11-18 12:50:41.968+0000 WARN Unknown config option: causal_clustering.transaction_advertised_address
2018-11-18 12:50:41.968+0000 WARN Unknown config option: causal_clustering.discovery_advertised_address
2018-11-18 12:50:41.969+0000 WARN Unknown config option: ha.host.data
2018-11-18 12:50:41.970+0000 WARN Unknown config option: causal_clustering.transaction_listen_address
2018-11-18 12:50:42.045+0000 INFO ======== Neo4j 3.3.9 ========
2018-11-18 12:50:42.275+0000 INFO Starting...
2018-11-18 12:50:48.632+0000 INFO Bolt enabled on 0.0.0.0:7687.
#
# There is insufficient memory for the Java Runtime Environment to continue.
# Native memory allocation (malloc) failed to allocate 262160 bytes for Chunk::new
# An error report file with more information is saved as:
# /var/lib/neo4j/hs_err_pid6.log
#
# Compiler replay data is saved as:
# /var/lib/neo4j/replay_pid6.log

Looking at the additional log file /var/lib/neo4j/hs_err_pid6.log revealed the following information:
#
# There is insufficient memory for the Java Runtime Environment to continue.
# Native memory allocation (malloc) failed to allocate 262160 bytes for Chunk::new
# Possible reasons:
# The system is out of physical RAM or swap space
# In 32 bit mode, the process size limit was hit
# Possible solutions:
# Reduce memory load on the system
# Increase physical memory or swap space
# Check if swap backing store is full
# Use 64 bit Java on a 64 bit OS
# Decrease Java heap size (-Xmx/-Xms)
# Decrease number of Java threads
# Decrease Java thread stack sizes (-Xss)
# Set larger code cache with -XX:ReservedCodeCacheSize=
# This output file may be truncated or incomplete.
#
# Out of Memory Error (allocation.cpp:390), pid=6, tid=0x00007fee96f9bae8
#
# JRE version: OpenJDK Runtime Environment (8.0_181-b13) (build 1.8.0_181-b13)
# Java VM: OpenJDK 64-Bit Server VM (25.181-b13 mixed mode linux-amd64 compressed oops)
# Derivative: IcedTea 3.9.0
# Distribution: Custom build (Tue Oct 23 11:27:22 UTC 2018)
# Failed to write core dump. Core dumps have been disabled. To enable core dumping, try "ulimit -c unlimited" before starting Java again
#
As it turns out, my Docker machine was set to only 1GB of RAM, while the minimum requirement for Neo4j (according to their website) is 2GB. I was able to solve the problem by replacing my default Docker machine according to this guide and giving the new one 4GB of memory.
Essentially, I did the following:
$ docker-machine rm default
$ docker-machine create -d virtualbox --virtualbox-cpu-count=2 --virtualbox-memory=4096 --virtualbox-disk-size=50000 default
You may also need to restart Docker:
docker-machine stop
exit
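Once the new machine is up, you can sanity-check that the extra memory is actually visible to Docker (my own verification step, not part of the original fix):
$ eval $(docker-machine env default)
$ docker info --format '{{.MemTotal}}'   # total memory in bytes; expect roughly 4 GiB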
I haven't found anything about this problem online so far, so maybe this helps someone someday =).
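As an aside: if you can't give the VM more RAM, the official Neo4j image also lets you cap Neo4j's own memory use through environment variables in the compose file. A sketch using the image's NEO4J_<setting> naming convention (the values here are illustrative, not recommendations):
version: '3'
services:
  neo4j:
    image: neo4j:3.3
    environment:
      # dbms.memory.heap.max_size and dbms.memory.pagecache.size,
      # spelled per the image's convention ('.' becomes '_', '_' becomes '__')
      - NEO4J_dbms_memory_heap_max__size=512m
      - NEO4J_dbms_memory_pagecache_size=256m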

Related

Docker local Nexus responded `Empty reply from server` when Curl

I tried to set up a local Nexus using Docker as per the instructions at https://hub.docker.com/r/sonatype/nexus3/
I ran the command below and it seemed successful:
docker run -d -p 8081:8081 --name nexus sonatype/nexus3
But when I curl http://localhost:8081/
it states:
curl: (52) Empty reply from server
Did I miss anything?
UPDATE
Apparently when I run docker logs -f nexus
It shows
java.io.IOException: Function not implemented
at sun.nio.fs.LinuxWatchService.<init>(LinuxWatchService.java:64)
at sun.nio.fs.LinuxFileSystem.newWatchService(LinuxFileSystem.java:47)
at com.install4j.runtime.launcher.FullLauncherHelper.watchDirectory(FullLauncherHelper.java:52)
at com.install4j.runtime.launcher.util.SingleInstance.createStartupListener(SingleInstance.java:108)
at com.install4j.runtime.launcher.util.SingleInstance.check(SingleInstance.java:95)
at com.install4j.runtime.launcher.util.SingleInstance.checkForCurrentLauncher(SingleInstance.java:31)
at com.install4j.runtime.launcher.UnixLauncher.checkSingleInstance(UnixLauncher.java:88)
at com.install4j.runtime.launcher.UnixLauncher.main(UnixLauncher.java:67)
#
# A fatal error has been detected by the Java Runtime Environment:
#
# SIGSEGV (0xb) at pc=0x00000040a5fe0678, pid=1, tid=0x00000040a5fe1700
#
# JRE version: OpenJDK Runtime Environment (8.0_282-b08) (build 1.8.0_282-b08)
# Java VM: OpenJDK 64-Bit Server VM (25.282-b08 mixed mode linux-amd64 compressed oops)
# Problematic frame:
# C 0x00000040a5fe0678
#
# Failed to write core dump. Core dumps have been disabled. To enable core dumping, try "ulimit -c unlimited" before starting Java again
#
# An error report file with more information is saved as:
# /opt/sonatype/nexus/hs_err_pid1.log
Compiled method (c1) 5136 467 3 org.apache.felix.resolver.util.OpenHashMap::mix (12 bytes)
total in heap [0x000000400cf73f10,0x000000400cf74208] = 760
relocation [0x000000400cf74038,0x000000400cf74060] = 40
main code [0x000000400cf74060,0x000000400cf74120] = 192
stub code [0x000000400cf74120,0x000000400cf741b0] = 144
oops [0x000000400cf741b0,0x000000400cf741b8] = 8
metadata [0x000000400cf741b8,0x000000400cf741c0] = 8
scopes data [0x000000400cf741c0,0x000000400cf741d0] = 16
scopes pcs [0x000000400cf741d0,0x000000400cf74200] = 48
dependencies [0x000000400cf74200,0x000000400cf74208] = 8
#
# If you would like to submit a bug report, please visit:
# https://bugzilla.redhat.com/enter_bug.cgi?product=Red%20Hat%20Enterprise%20Linux%208&component=java-1.8.0-openjdk
#
qemu: uncaught target signal 6 (Aborted) - core dumped
UPDATE
When I try this on an Intel MacBook Pro, everything works fine. The issue happens on my M1 MacBook Pro.
I tried export DOCKER_DEFAULT_PLATFORM=linux/amd64 as per https://stackoverflow.com/a/66900911/3286489, but the issue still persists.
The Sonatype Nexus 3 image still has compatibility issues on the M1.
We can use
https://hub.docker.com/r/klo2k/nexus3
for now:
docker run -d -p 8081:8081 --name nexus klo2k/nexus3
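Before pulling an alternative image onto an M1 machine, it may help to confirm it actually ships an arm64 variant. A quick check of my own (not from the original post):
$ docker pull klo2k/nexus3
$ docker image inspect --format '{{.Os}}/{{.Architecture}}' klo2k/nexus3   # want linux/arm64 on an M1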

Kubernetes garbage collection clean docker components

I'm currently running a k8s cluster; however, occasionally I get memory issues. The following error will pop up:
Failed create pod sandbox: rpc error: code = Unknown desc = failed to create a sandbox for pod "<web app>": Error response from daemon: devmapper: Thin Pool has 6500 free data blocks which is less than minimum required 7781 free data blocks. Create more free space in thin pool or use dm.min_free_space option to change behavior
I can resolve this by manually running docker ps -a -f status=exited -q | xargs -r docker rm -v
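A heavier one-off cleanup (my own suggestion, not from the original question) is docker system prune, which also removes dangling images and unused networks:
# Remove stopped containers, dangling images, and unused networks
docker system prune -f
# Add --volumes to also remove unused volumes (destructive; review first)
docker system prune -f --volumes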
However, I want Kubernetes to do this work itself. Currently in my kubelet config I have:
evictionHard:
  imagefs.available: 15%
  memory.available: "100Mi"
  nodefs.available: 10%
  nodefs.inodesFree: 5%
imageGCHighThresholdPercent: 85
imageGCLowThresholdPercent: 80
What am I doing wrong?
Reading the error you've posted, it seems to me you are using "devicemapper" as your storage driver.
The devicemapper storage driver was deprecated in Docker Engine 18.09 and will be removed in a future release. It is recommended that users of the devicemapper storage driver migrate to overlay2.
I suggest you use "overlay2" as the storage driver, unless you are running an unsupported OS. See here for the supported OS versions.
You can check your current storage driver using the docker info command; you will get output like this:
Client:
 Debug Mode: false

Server:
 Containers: 21
  Running: 18
  Paused: 0
  Stopped: 3
 Images: 11
 Server Version: 19.03.5
 Storage Driver: devicemapper  <<== See here
  Pool Name: docker-8:1-7999625-pool
  Pool Blocksize: 65.54kB
 ...
Supposing you want to change the storage driver from devicemapper to overlay2, you need to follow these steps:
Changing the storage driver makes existing containers and images inaccessible on the local system. Use docker save to save any images you have built or push them to Docker Hub or a private registry before changing the storage driver, so that you do not need to re-create them later.
Before following this procedure, you must first meet all the prerequisites.
Stop Docker.
$ sudo systemctl stop docker
Copy the contents of /var/lib/docker to a temporary location.
$ cp -au /var/lib/docker /var/lib/docker.bk
If you want to use a separate backing filesystem from the one used by /var/lib/, format the filesystem and mount it into /var/lib/docker. Make sure to add this mount to /etc/fstab to make it permanent.
Edit /etc/docker/daemon.json. If it does not yet exist, create it. Assuming that the file was empty, add the following contents.
{
"storage-driver": "overlay2"
}
Docker does not start if the daemon.json file contains badly-formed JSON.
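A quick way to catch malformed JSON before restarting (my own habit; any JSON validator works):
$ python3 -m json.tool /etc/docker/daemon.json   # prints the parsed JSON, or an error pointing at the bad line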
Start Docker.
$ sudo systemctl start docker
Verify that the daemon is using the overlay2 storage driver. Use the docker info command and look for Storage Driver and Backing filesystem.
Client:
 Debug Mode: false

Server:
 Containers: 35
  Running: 15
  Paused: 0
  Stopped: 20
 Images: 11
 Server Version: 19.03.5
 Storage Driver: overlay2  <=== HERE
  Backing Filesystem: extfs  <== HERE
  Supports d_type: true
Extracted from Docker Documentation.

Can I run k8s master INSIDE a docker container? Getting errors about k8s looking for host's kernel details

In a docker container I want to run k8s.
When I run kubeadm join ... or kubeadm init commands I see sometimes errors like
\"modprobe: ERROR: ../libkmod/libkmod.c:586 kmod_search_moddep() could
not open moddep file
'/lib/modules/3.10.0-1062.1.2.el7.x86_64/modules.dep.bin'.
nmodprobe:
FATAL: Module configs not found in directory
/lib/modules/3.10.0-1062.1.2.el7.x86_64",
err: exit status 1
because (I think) my container does not have the expected kernel header files.
I realise that the container reports its kernel based on the host that is running the container, and looking at the k8s code I see:
// getKernelConfigReader search kernel config file in a predefined list. Once the kernel config
// file is found it will read the configurations into a byte buffer and return. If the kernel
// config file is not found, it will try to load kernel config module and retry again.
func (k *KernelValidator) getKernelConfigReader() (io.Reader, error) {
	possibePaths := []string{
		"/proc/config.gz",
		"/boot/config-" + k.kernelRelease,
		"/usr/src/linux-" + k.kernelRelease + "/.config",
		"/usr/src/linux/.config",
	}
so I am a bit confused about the simplest way to run k8s inside a container such that it consistently gets past this kernel check.
I note that running docker run -it solita/centos-systemd:7 /bin/bash on a macOS host I see :
# uname -r
4.9.184-linuxkit
# ls -l /proc/config.gz
-r--r--r-- 1 root root 23834 Nov 20 16:40 /proc/config.gz
but running exact same on a Ubuntu VM I see :
# uname -r
4.4.0-142-generic
# ls -l /proc/config.gz
ls: cannot access /proc/config.gz
[Weirdly I don't see this FATAL: Module configs not found in directory error every time, but I guess that is a separate question!]
UPDATE 22/November/2019. I see now that k8s DOES run okay in a container. Real problem was weird/misleading logs. I have added an answer to clarify.
I do not believe that is possible given the nature of containers.
You should instead test your app in a docker container then deploy that image to k8s either in the cloud or locally using minikube.
Another solution is to run it under kind, which uses the docker driver instead of VirtualBox:
https://kind.sigs.k8s.io/docs/user/quick-start/
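For example, the kind quick start boils down to something like this (assuming kind and kubectl are already installed; the cluster name is my own choice):
# Create a single-node cluster whose "node" is a Docker container
kind create cluster --name dev
# Point kubectl at it and verify the control plane answers
kubectl cluster-info --context kind-dev
# Tear it down when finished
kind delete cluster --name dev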
It seems the FATAL error part was a bit misleading.
It was badly formatted by my test environment (all on one line).
When k8s was failing I saw the FATAL and assumed (incorrectly) that it was the root cause.
When I format the logs nicely I see ...
kubeadm join 172.17.0.2:6443 --token 21e8ab.1e1666a25fd37338 --discovery-token-unsafe-skip-ca-verification --experimental-control-plane --ignore-preflight-errors=all --node-name 172.17.0.3
[preflight] Running pre-flight checks
[WARNING FileContent--proc-sys-net-bridge-bridge-nf-call-iptables]: /proc/sys/net/bridge/bridge-nf-call-iptables does not exist
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 4.4.0-142-generic
DOCKER_VERSION: 18.09.3
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 18.09.3. Latest validated version: 18.06
[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: ERROR: ../libkmod/libkmod.c:586 kmod_search_moddep() could not open moddep file '/lib/modules/4.4.0-142-generic/modules.dep.bin'\nmodprobe: FATAL: Module configs not found in directory /lib/modules/4.4.0-142-generic\n", err: exit status 1
[discovery] Trying to connect to API Server "172.17.0.2:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://172.17.0.2:6443"
[discovery] Failed to request cluster info, will try again: [the server was unable to return a response in the time allotted, but may still be processing the request (get configmaps cluster-info)]
There are other errors later, which I originally thought were a side-effect of the nasty-looking FATAL error, e.g. .... "[util/etcd] Attempt timed out"]}, but I now think the root cause is that the etcd part sometimes times out.
Adding this answer in case someone else puzzled like I was.

Elastic in docker stack/swarm

I have a swarm of two nodes:
[ra#speechanalytics-test ~]$ docker node ls
ID HOSTNAME STATUS AVAILABILITY MANAGER STATUS ENGINE VERSION
mlwwmkdlzbv0zlapqe1veq3uq speechanalytics-preprod Ready Active 18.09.3
se717p88485s22s715rdir9x2 * speechanalytics-test Ready Active Leader 18.09.3
I am trying to run a container with Elasticsearch in a stack. Here is my docker-compose.yml file:
version: '3.4'
services:
  elastic:
    image: docker.elastic.co/elasticsearch/elasticsearch:6.7.0
    environment:
      - cluster.name=single-node
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - esdata:/usr/share/elasticsearch/data
    deploy:
      placement:
        constraints:
          - node.hostname==speechanalytics-preprod
volumes:
  esdata:
    driver: local
After starting the stack with
docker stack deploy preprod -c docker-compose.yml
the container crashes within 20 seconds:
docker service logs preprod_elastic
...
| OpenJDK 64-Bit Server VM warning: Option UseConcMarkSweepGC was deprecated in version 9.0 and will likely be removed in a future release.
| OpenJDK 64-Bit Server VM warning: UseAVX=2 is not supported on this CPU, setting it to UseAVX=0
| [2019-04-03T16:41:30,044][WARN ][o.e.b.JNANatives ] [unknown] Unable to lock JVM Memory: error=12, reason=Cannot allocate memory
| [2019-04-03T16:41:30,049][WARN ][o.e.b.JNANatives ] [unknown] This can result in part of the JVM being swapped out.
| [2019-04-03T16:41:30,049][WARN ][o.e.b.JNANatives ] [unknown] Increase RLIMIT_MEMLOCK, soft limit: 16777216, hard limit: 16777216
| [2019-04-03T16:41:30,050][WARN ][o.e.b.JNANatives ] [unknown] These can be adjusted by modifying /etc/security/limits.conf, for example:
| OpenJDK 64-Bit Server VM warning: Option UseConcMarkSweepGC was deprecated in version 9.0 and will likely be removed in a future release.
| # allow user 'elasticsearch' mlockall
| OpenJDK 64-Bit Server VM warning: UseAVX=2 is not supported on this CPU, setting it to UseAVX=0
| elasticsearch soft memlock unlimited
| [2019-04-03T16:41:02,949][WARN ][o.e.b.JNANatives ] [unknown] Unable to lock JVM Memory: error=12, reason=Cannot allocate memory
| elasticsearch hard memlock unlimited
| [2019-04-03T16:41:02,954][WARN ][o.e.b.JNANatives ] [unknown] This can result in part of the JVM being swapped out.
| [2019-04-03T16:41:30,050][WARN ][o.e.b.JNANatives ] [unknown] If you are logged in interactively, you will have to re-login for the new limits to take effect.
| [2019-04-03T16:41:02,954][WARN ][o.e.b.JNANatives ] [unknown] Increase RLIMIT_MEMLOCK, soft limit: 16777216, hard limit: 16777216
On both nodes I have:
ra#speechanalytics-preprod:~$ sysctl vm.max_map_count
vm.max_map_count = 262144
Any ideas how to fix this?
The memlock errors you're seeing from Elasticsearch are a common issue that is not unique to Docker; they occur when Elasticsearch is told to lock its memory but is unable to do so. You can circumvent the error by removing the following environment variable from the docker-compose.yml file:
- bootstrap.memory_lock=true
Memlock may be used with Docker Swarm Mode, but with some caveats.
Not all options that work with docker-compose (Docker Compose) work with docker stack deploy (Docker Swarm Mode), and vice versa, despite both sharing the docker-compose YAML syntax. One such option is ulimits:, which, when used with docker stack deploy, will be ignored with a warning message, like so:
Ignoring unsupported options: ulimits
My guess is that with your docker-compose.yml file, Elasticsearch runs fine with docker-compose up, but not with docker stack deploy.
With Docker Swarm Mode, by default, the Elasticsearch instance as you have defined will have trouble with memlock. Currently, setting of ulimits for docker swarm services is not yet officially supported. There are ways to get around the issue, though.
If the host is Ubuntu, unlimited memlock can be enabled across the docker service (see here and here). This can be achieved via the commands:
echo -e "[Service]\nLimitMEMLOCK=infinity" | SYSTEMD_EDITOR=tee systemctl edit docker.service
systemctl daemon-reload
systemctl restart docker
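You can verify the override actually reaches containers by checking the memlock limit from inside one (my own check, not from the original answer):
$ docker run --rm ubuntu bash -c 'ulimit -l'   # should print 'unlimited' after the override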
However, setting memlock to infinity is not without its drawbacks, as spelt out by Elastic themselves here.
Based on my testing, the solution works on Docker 18.06, but not on 18.09. Given the inconsistency and the possibility of Elasticsearch failing to start, the better option would be to not use memlock with Elasticsearch when deploying on Swarm. Instead, you can opt for any of the other methods mentioned in Elasticsearch Docs to achieve similar results.
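For reference, one such alternative from the Elasticsearch docs is to minimize swapping at the host level instead of locking memory. A sketch of that approach (adjust for your distro):
# Disable swap entirely until the next reboot
sudo swapoff -a
# Or keep swap but strongly discourage the kernel from using it
sudo sysctl -w vm.swappiness=1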

Docker toolbox cannot allocate memory

I'm trying to get Docker to run a container locally on my Mac that I've been working on in the cloud. I did the docker commit/save/load fine. But when I go to run it locally after installing Docker Toolbox, I get this error:
docker logs es-loaded-with-data
Java HotSpot(TM) 64-Bit Server VM warning: INFO: os::commit_memory(0x00000006c5330000, 4207738880, 0) failed; error='Cannot allocate memory' (errno=12)
Starting elasticsearch: #
# There is insufficient memory for the Java Runtime Environment to continue.
# Native memory allocation (mmap) failed to map 4207738880 bytes for committing reserved memory.
# An error report file with more information is saved as:
# //hs_err_pid16.log
Java HotSpot(TM) 64-Bit Server VM warning: INFO: os::commit_memory(0x00000006c5330000, 4207738880, 0) failed; error='Cannot allocate memory' (errno=12)
Starting elasticsearch: #
If I do docker info
then I get
Total Memory: 1.956 GiB
Clearly ~2 GB isn't enough. How do I increase it so my container will run?
Docker on macOS runs inside a VirtualBox VM managed by either docker-machine or the older boot2docker. I am not sure if docker-machine supports modifying the VM RAM directly, but you can probably just start the VirtualBox app and modify the amount of VM memory directly. Restart the VM et voilà.
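If you'd rather script it, VirtualBox's own CLI can resize a stopped machine (assuming the docker-machine VM is named default):
docker-machine stop default
VBoxManage modifyvm default --memory 4096   # size in MB
docker-machine start default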
Restarting the docker service solved the problem for me.
