Allocating more memory to Docker Enterprise Preview Edition

I am running Docker Enterprise Preview Edition on Windows Server 2019 and have managed to pull and run the docker-compose.yml file below. However, shortly afterwards the container shuts down, and when I run docker-compose logs it shows the insufficient-memory error below:
Docker-compose file
version: '3.7'
services:
  elasticsearch:
    container_name: elasticsearch
    # image: docker.elastic.co/elasticsearch/elasticsearch:7.9.2
    image: docker.elastic.co/elasticsearch/elasticsearch:7.17.1
    deploy:
      resources:
        limits:
          cpus: 0.25
          memory: 4096m
    ports:
      - 9200:9200
    volumes:
      - C:\DockerContainers\Elasticsearch\data:/usr/share/elasticsearch/data
      - C:\DockerContainers\Elasticsearch\config\certs:/usr/share/elasticsearch/config/certs
    environment:
      - xpack.monitoring.enabled=true
      - xpack.watcher.enabled=true
      - ES_JAVA_OPTS=-Xms512m -Xmx512m
      - discovery.type=single-node
    # networks:
    #   - elastic
  kibana:
    container_name: kibana
    # image: docker.elastic.co/kibana/kibana:7.9.2
    image: docker.elastic.co/kibana/kibana:7.17.1
    deploy:
      resources:
        limits:
          cpus: 0.25
          memory: 4096m
    ports:
      - 5601:5601
    volumes:
      - C:\DockerContainers\Elasticsearch\Kibana\config\certs:/usr/share/kibana/config/certs
    depends_on:
      - elasticsearch
    # networks:
    #   - elastic
# networks:
#   elastic:
#     driver: nat
Docker logs
elasticsearch | # There is insufficient memory for the Java Runtime Environment to continue.
elasticsearch | # Native memory allocation (mmap) failed to map 65536 bytes for committing reserved memory.
elasticsearch | # An error report file with more information is saved as:
elasticsearch | # logs/hs_err_pid7.log
I read in the Elasticsearch Docker documentation that it needs at least 4 GB of RAM. I have included the memory limit in the docker-compose.yml file, but it doesn't seem to take effect. Does anyone know how to set the memory available to Docker running on Windows Server 2019?

I ran into this same issue trying to start Docker using almost exactly the same configuration as you, but from Windows 10 instead of Windows Server 2019. I suspect the issue isn't the memory configuration for the Elastic containers, but rather that the Docker host itself needs to be tweaked.
I'm not sure how to go about increasing memory for the Docker host when running Docker Enterprise Preview Edition, but for something like Docker Desktop, this can be done by changing the memory limit afforded to it by adjusting the Settings -> Advanced -> Memory slider. Docker Desktop may need to be restarted afterwards for this change to take effect.
Similarly, if you are running a Docker machine, like I am, it may be necessary to recreate it, but with more than the default 1GB that is allotted to new Docker machine instances. Something like this:
docker-machine create -d vmwareworkstation --vmwareworkstation-memory-size 8192 default
Swap in whatever vm type makes sense for you (e.g., VirtualBox) and remember that you need to run docker-machine env | Invoke-Expression in each console window in which you intend to run Docker commands.
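For example, with the VirtualBox driver the equivalent would be (a sketch; the 8192 MB size is just carried over from the command above):
docker-machine create -d virtualbox --virtualbox-memory 8192 default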
I saw definite improvement after giving the Docker host more breathing room, as the memory-related errors disappeared. However, the containers still failed to start due to the following error:
max virtual memory areas vm.max_map_count [65530] is too low, increase to at least [262144]
This is a known issue (see https://www.elastic.co/guide/en/elasticsearch/reference/current/docker.html#_set_vm_max_map_count_to_at_least_262144).
Again, if you're using Docker Desktop with WSL2 installed, you can run this from a Windows prompt:
wsl -d docker-desktop sysctl -w vm.max_map_count=262144
(or run sysctl -w vm.max_map_count=262144 directly from a shell inside the docker-desktop distribution).
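That setting is lost whenever the WSL2 VM restarts. One way to persist it (an assumption on my part: your WSL2 build supports kernelCommandLine in .wslconfig) is to add the following to %UserProfile%\.wslconfig and then run wsl --shutdown:
[wsl2]
# assumption: kernelCommandLine is honored by your WSL2 version
kernelCommandLine = sysctl.vm.max_map_count=262144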
For a Docker machine, you'll need to ssh into it and set the vm.max_map_count directly:
docker-machine env | Invoke-Expression
docker-machine ssh default
sudo sysctl -w vm.max_map_count=262144
The above change to the Docker machine will last for your current session but will be lost after a reboot. You can make it stick as follows (see the sketch after this list):
- Add sysctl -w vm.max_map_count=262144 to the /var/lib/boot2docker/bootlocal.sh file. Note this file may not exist before your changes.
- Run chmod +x /var/lib/boot2docker/bootlocal.sh.
- Exit ssh and restart the Docker machine via docker-machine restart default.
- To confirm the change, run docker-machine ssh default sudo sysctl vm.max_map_count. You should see it set to 262144.
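Put together, those steps look roughly like this (a sketch of the same sequence):
docker-machine ssh default
# inside the machine:
sudo sh -c 'echo "sysctl -w vm.max_map_count=262144" >> /var/lib/boot2docker/bootlocal.sh'
sudo chmod +x /var/lib/boot2docker/bootlocal.sh
exit
# back on the host:
docker-machine restart default
docker-machine ssh default sudo sysctl vm.max_map_count   # expect: vm.max_map_count = 262144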
After these changes, I was able to bring the Elastic containers up. You can smoke test this by issuing a GET request for http://localhost:9200 in either Postman or curl. If you're using Docker machine, the ports you've set up are accessible to the machine, but not to your Windows box; to be able to connect to port 9200, you'll need to set up port forwarding via the docker-machine ssh -f -N -L 9200:localhost:9200 command.
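Concretely, the forwarding plus a smoke test might look like this (a sketch; the machine name default is an assumption):
docker-machine ssh default -f -N -L 9200:localhost:9200
curl http://localhost:9200   # should return the Elasticsearch cluster-info JSON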
Also, consider adding the following to your Docker compose file:
environment:
  - bootstrap.memory_lock=true
ulimits:
  memlock:
    soft: -1
    hard: -1
Hope this helps!

This is a late response, but it may still be useful to someone out there.
In fact, on Windows Server, Docker Desktop is the better option for running Docker. If you use Docker Enterprise Edition on Windows Server 2019 without having purchased license keys, it restricts the memory (RAM) it allocates to a container: at most it will allocate 9.7 MB of memory to a container. You can confirm this with the command below:
docker stats <container id>
Here you cannot see the MEM USAGE / LIMIT column at all in the output, unlike with Docker Desktop.
So when your container requests more than 9.7 MB of memory, it goes down. You also cannot use the deploy option in the compose file to reserve memory for a specific container with Enterprise Edition on Windows Server 2019; it throws the error 'memory reservation not supported'. The compose file will accept mem_limit, but again it will not actually allocate the stated memory.
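For reference, mem_limit is a Compose v2-format option; a sketch of the kind of service entry that is accepted (though, per the above, not actually enforced in this setup) would be:
version: '2'
services:
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.17.1
    mem_limit: 4g   # accepted by docker-compose, but not honored here per this answer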
Note - the advantage of Docker Enterprise Edition is that it keeps running in the background even if the user logs off the server, whereas Docker Desktop stops running when the user logs off.

Related

Docker exiting immediately

I am running Docker on my Windows 10 Home machine, so it is the older version of Docker, not the Hyper-V version.
I have set up a SQL Server docker container, however when I run it, it exits with the error
Exited (1)
When I look at the logs it says
sqlservr: This program requires a machine with at least 2000 megabytes of memory.
/opt/mssql/bin/sqlservr: This program requires a machine with at least 2000 megabytes of memory.
However, I have 8 GB of memory on my machine and at least 3.5 GB free when running Docker. I have tried using the --memory flag to allocate over 2 GB to the container (as the docs state that the SQL Server image needs 2 GB), but it still exits...
Does anyone know what is potentially the issue?
I had the same issue, and I solved it by using this Docker image:
https://hub.docker.com/r/justin2004/mssql_server_tiny
This is my docker-compose file:
services:
  db:
    image: justin2004/mssql_server_tiny
    environment:
      - ACCEPT_EULA=Y
      - SA_PASSWORD=#P1ssword#
    ports:
      - '1433:1433'
    expose:
      - 1433
Create a .wslconfig file in your user folder:
[wsl2]
memory=4GB   # Limits VM memory in WSL 2 to 4GB
processors=2 # Makes the WSL 2 VM use two virtual processors
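The new limits only take effect after the WSL 2 VM restarts; from PowerShell that is typically:
wsl --shutdown
# then start Docker Desktop again so the VM comes back up with the new limits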

docker stack ignoring unsupported options

I am running Docker (Server Version: 18.06.0-ce) on CentOS 7.5.
I have a docker-compose file running a DB2 server with the following sample definition:
version: "3.7"
services:
db2exp:
image: db2
ports:
- "50000:50000"
networks:
- lmnet
ipc: host
cap_add:
- IPC_LOCK
- IPC_OWNER
environment:
- DB2INSTANCE=db2inst1
- DB2PASSWD=db2inst1
- LICENSE=accept
volumes:
- db2data:/home
When using docker-compose up, I do not have problems starting the db2 service. However, when I try to use docker stack, I get the following message:
docker stack deploy test --compose-file docker-compose.yml
Ignoring unsupported options: cap_add, ipc
This causes db2start to return SQL1042C An unexpected system error occurred.
It would be ideal if what runs in compose runs in stack. What, if any, can be done so that the db2 container can be used in a docker stack environment and not just docker-compose?
If it matters, I have docker-compose version 1.23.0-rc1, build 320e4819.
Thanks in advance.
This is not supported by swarm mode currently, as the error message you've shown and the documentation identify. Personally, I'd question whether you really want to have your database running in swarm mode: Docker does not migrate the volume for you, so you wouldn't see your data if the container were rescheduled on another node.
You can follow the progress of getting this added to Swarm Mode in the github issues, there are several, including:
https://github.com/moby/moby/issues/24862
https://github.com/moby/moby/issues/25885
The hacky solution I've seen, if you really need this to run from swarm mode, is to schedule a container with the docker socket mounted and the docker binaries in the image, which then executes a docker run command directly against the local engine. E.g.:
version: "3.7"
services:
db2exp-wrapper:
image: docker:stable
volumes:
- /var/run/docker.sock:/var/run/docker.sock
command: docker run --rm --cap-add IPC_LOCK --cap-add IPC_OWNER -p 50000:50000 ... db2
I don't really recommend the above solution; sticking with docker-compose would likely be a better implementation for your use case. Downsides of this solution include publishing the port on only a single host, and the potential security risk of anyone else having access to that docker socket.
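If you do end up keeping the database in swarm mode, one partial mitigation for the volume concern (it does not restore cap_add/ipc support) is to pin the service to a single node with a placement constraint, roughly like this (a sketch; the hostname is hypothetical):
version: "3.7"
services:
  db2exp:
    image: db2
    ports:
      - "50000:50000"
    volumes:
      - db2data:/home
    deploy:
      placement:
        constraints:
          - node.hostname == db-host-1   # hypothetical node name: keeps the task where its local volume lives
volumes:
  db2data: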

Docker-compose top level volume unable to find path

I have a pretty simple docker-compose setup which works on my colleague's computer (*), but for some obscure reason it doesn't work on mine.
Here is my docker-compose.yml
version: '3.3'
services:
  # ... there are other services that are starting successfully ...
  reporting:
    image: microsoft/dotnet:2.0-runtime
    hostname: reporting
    container_name: reporting
    volumes:
      - publish-output:/app
    command: dotnet /app/MocksGenerator.dll -s ${MSNAME_R} -p ${MSPORT_R} -c http://${CHOST} -m http://${MBHOST}${MSNAME_R}:${MBPORT}
    networks:
      - consul
    links:
      - mbreporting
      - consul
      - fabio
    depends_on:
      - mbreporting
      - consul
      - fabio
networks:
  consul:
volumes:
  publish-output:
    driver: local
    driver_opts:
      device: /mnt/d/Repositories/microservices.mocking/docker/PublishOutput
      o: bind
Here is the error I receive from docker-compose when I try to start it using docker-compose up:
ERROR: for reporting Cannot start service reporting: error while mounting volume '/var/lib/docker/volumes/betsreporting_publish-output/_data': error while mounting volume with options: type='' device='/mnt/d/Repositories/microservices.mocking/docker/PublishOutput' o='bind': no such file or directory
Running ls -la /mnt/d/Repositories/microservices.mocking/docker yields
drwxrwxrwx 0 root root 4096 May 30 16:12 PublishOutput
So the host directory exists for sure, but docker-compose can't seem to find it for some reason. Why?
(*) My colleague is using a volume of type bind. I tried that as well, and it failed for the same reason, so I decided to change the volume type; but the root problem seems to be that docker-compose can't find the host directory.
After resetting the Docker daemon, the drive-sharing prompt appeared again, and after re-sharing the disk it started working, even though it had been shared previously. I suspect that sharing a disk with Docker does not apply to directories created AFTER the sharing was done (hence the need to re-share), but I am not entirely sure; I will check that with the Docker engine guys.
One more thing: I also tried to run it from the Linux subsystem on Windows and it didn't work. I suspect the permissions of the Linux subsystem and Windows may not match, or the Docker engine may have a bug, because even after re-sharing the error persisted there, so I had to run it from PowerShell instead.
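If you hit something similar, a quick way to check what the engine actually recorded for the volume (using the volume name from the error message above) is:
docker volume inspect betsreporting_publish-output
# check the "Options" entries (device/o) and the "Mountpoint" the engine will use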

Why is my Docker volume not working in a remote build box?

I am attempting to add a volume to a Docker container that will be built and run in a Docker Compose system on a hosted build service (CircleCI). It works fine locally, but not remotely. CircleCI provides an SSH facility I can use to debug why a container is not behaving as expected.
The relevant portion of the Docker Compose file is thus:
missive-mongo:
  image: missive-mongo
  command: mongod -v --logpath /var/log/mongodb/mongodb.log --logappend
  volumes:
    - ${MONGO_LOCAL}:/data/db
    - ${LOGS_LOCAL_PATH}/mongo:/var/log/mongodb
  networks:
    - storage_network
Locally, if I do docker inspect integration_missive-mongo_1 (i.e. the running container name), I will get the volumes as expected:
...
"HostConfig": {
"Binds": [
"/tmp/missive-volumes/logs/mongo:/var/log/mongodb:rw",
"/tmp/missive-volumes/mongo:/data/db:rw"
],
...
On the same container, I can shell in and see that the volume works fine:
docker exec -it integration_missive-mongo_1 sh
/ # tail /var/log/mongodb/mongodb.log
2017-11-28T22:50:14.452+0000 D STORAGE [initandlisten] admin.system.version: clearing plan cache - collection info cache reset
2017-11-28T22:50:14.452+0000 I INDEX [initandlisten] build index on: admin.system.version properties: { v: 2, key: { version: 1 }, name: "incompatible_with_version_32", ns: "admin.system.version" }
2017-11-28T22:50:14.452+0000 I INDEX [initandlisten] building index using bulk method; build may temporarily use up to 500 megabytes of RAM
2017-11-28T22:50:14.452+0000 D INDEX [initandlisten] bulk commit starting for index: incompatible_with_version_32
2017-11-28T22:50:14.452+0000 D INDEX [initandlisten] done building bottom layer, going to commit
2017-11-28T22:50:14.454+0000 I INDEX [initandlisten] build index done. scanned 0 total records. 0 secs
2017-11-28T22:50:14.455+0000 I COMMAND [initandlisten] setting featureCompatibilityVersion to 3.4
2017-11-28T22:50:14.455+0000 I NETWORK [thread1] waiting for connections on port 27017
2017-11-28T22:50:14.455+0000 D COMMAND [PeriodicTaskRunner] BackgroundJob starting: PeriodicTaskRunner
2017-11-28T22:50:14.455+0000 D COMMAND [ClientCursorMonitor] BackgroundJob starting: ClientCursorMonitor
OK, now for the remote. I kick off a build, it fails because Mongo won't start, so I use the SSH facility that keeps a box alive after a failed build.
I first hack the docker-compose file so that it does not try to launch Mongo, as it will fail; I just get it to sleep instead:
missive-mongo:
  image: missive-mongo
  command: sleep 1000
  volumes:
    - ${MONGO_LOCAL}:/data/db
    - ${LOGS_LOCAL_PATH}/mongo:/var/log/mongodb
  networks:
    - storage_network
I then run the docker-compose up script to bring all the containers up, and examine the problematic box with docker inspect integration_missive-mongo_1:
"HostConfig": {
"Binds": [
"/tmp/missive-volumes/logs/mongo:/var/log/mongodb:rw",
"/tmp/missive-volumes/mongo:/data/db:rw"
],
That looks fine. So on the host I create a dummy log file, and list it to prove it is there:
bash-4.3# ls /tmp/missive-volumes/logs/mongo
mongodb.log
So I try shelling in again with docker exec -it integration_missive-mongo_1 sh. This time I find that the folder exists, but not the volume contents:
/ # ls /var/log
mongodb
/ # ls /var/log/mongodb/
/ #
This is very odd, because the reliability of volumes in the remote Docker/Compose config has been exemplary up until now.
Theories
My main one at present is that the differing versions of Docker and Docker Compose could have something to do with it. So I will list out what I have:
Local
Host: Linux Mint
Docker version 1.13.1, build 092cba3
docker-compose version 1.8.0, build unknown
Remote
Host: I suspect it is Alpine (it uses apk for installing)
I am using the docker:17.05.0-ce-git image supplied by CircleCI; the version shows as Docker version 17.05.0-ce, build 89658be
Docker Compose is installed via pip, and getting the version produces docker-compose version 1.13.0, build 1719ceb.
So, there is some version discrepancy. As a shot in the dark, I could try bumping up Docker/Compose, though I am wary of breaking other things.
What would be ideal though, is some sort of advanced Docker commands I can use to debug why the volume appears to be registered but is not exposed inside the container. Any ideas?
CircleCI runs docker-compose remotely from the Docker daemon so local bind mounts don't work.
A named volume will default to the local driver and would work in CircleCI's Compose setup; the volume will exist wherever the container runs.
Logging should generally be left to stdout and stderr in a single process per container setup. Then you can make use of a logging driver plugin to ship to a central collector. MongoDB defaults to logging to stdout/stderr when run in the foreground.
Combining the volumes and logging:
version: "2.1"
services:
syslog:
image: deployable/rsyslog
ports:
- '1514:1514/udp'
- '1514:1514/tcp'
mongo:
image: mongo
command: mongod -v
volumes:
- 'mongo_data:/data/db'
depends_on:
- syslog
logging:
options:
tag: '{{.FullID}} {{.Name}}'
syslog-address: "tcp://10.8.8.8:1514"
driver: syslog
volumes:
mongo_data:
This is a little bit of a hack, as the logging endpoint would normally be external rather than a container in the same group. That is why the logging section uses the external address and port mapping to access the syslog server: the connection is between the Docker daemon and the log server, rather than container to container.
I wanted to add an additional answer to accompany the accepted one. My use-case on CircleCI is to run browser-based integration tests, in order to check that a whole stack is working correctly. A number of the 11 containers in use have volumes defined for various things, such as log output and raw database file storage.
What I had not realised until now was that volumes in CircleCI's Docker executor do not work, as a result of a technical Docker limitation. Because of this failure, in each previous case the files were simply written to an empty folder inside the container.
In my new case however, this issue was causing Mongo to fail. The reason for that was that I'm using --logappend to prevent Mongo from doing its own log rotation on start-up, and this switch requires the path specified in --logpath to exist. Since it existed on the host, but the volume creation failed, the container could not see the log file.
To fix this, I have modified my Mongo service entry to call a script in the command section:
missive-mongo:
  image: missive-mongo
  command: sh /root/mongo-logging.sh
And the script looks like this:
#!/bin/sh
#
# The command sets up logging in Mongo. The touch is for the benefit of any
# environment in which the logs do not already exist (e.g. Integration, since
# CircleCI does not support volumes)
touch /var/log/mongodb/mongodb.log \
&& mongod -v --logpath /var/log/mongodb/mongodb.log --logappend
In the two possible use cases, this will act as follows:
- In the case of the mount working (dev, live), it will simply touch the file if it exists, and create it if it does not (e.g. in a completely new environment).
- In the case of the mount not working (CircleCI), it will create the file.
Either way, this is a nice safety feature to prevent Mongo from blowing up.

Docker mac symfony 3 very slow

I'm starting a new project with Symfony 3 and I want to use Docker for the development environment. We will work on this project with a dozen developers so I want to have an easy install process.
Here's my docker-compose.yml
version: '2'
services:
  db:
    image: mysql
    ports:
      - "3307:3306"
    environment:
      MYSQL_ROOT_PASSWORD: root
      MYSQL_DATABASE: mydb
      MYSQL_USER: root
      MYSQL_PASSWORD: root
  php:
    build: ./php-fpm
    expose:
      - "9001"
    volumes:
      - .:/var/www/project
      - ./var/logs:/var/www/project/app/logs
    links:
      - db
  nginx:
    build: ./nginx
    ports:
      - "8001:80"
    links:
      - php
    volumes_from:
      - php
    volumes:
      - ./var/logs/nginx/:/var/log/nginx
I installed the recent Docker for Mac application (beta). The big issue is that my Symfony app is very, very slow (a simple page takes more than 5 seconds). The same app with MAMP is much faster (500 ms max). Is this a known issue with Docker? How can I debug it?
This is a known issue. Your local file system is being mounted into the Docker for Mac Linux VM with osxfs, and there is some additional latency when reading and writing these mounted files. For small applications this isn't too noticeable, but for larger applications that may read thousands of files on a single request it can slow things down significantly.
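One quick way to confirm that the mounted file system is the bottleneck is to compare write speed on the container's own file system with write speed through the osxfs mount, e.g. (a rough sketch):
# container-local file system
docker run --rm alpine sh -c 'time dd if=/dev/zero of=/tmp/test bs=1M count=100'
# osxfs-mounted host directory (run from your project folder)
docker run --rm -v "$PWD":/data alpine sh -c 'time dd if=/dev/zero of=/data/test bs=1M count=100 && rm /data/test'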
Sorry for the late answer, but you could install Docker CE Edge, because it supports cached mode.
Download Docker Edge (while waiting for the stable version of Docker to support cached mode).
Add the following lines to your docker-compose.yml file:
php:
  volumes:
    - ${SYMFONY_APP_PATH}:/var/www/symfony:cached
Replace ${SYMFONY_APP_PATH} with your own path.
Actually, I'm using Docker to run projects locally. To make Docker run faster I used the setup below:
macOS:
Docker Toolbox
Install the .dmg file as usual.
Open your terminal and type:
$ docker-machine create --driver virtualbox default
$ docker-machine env default
$ eval "$(docker-machine env default)"
Now you have the docker-machine up and running; any docker-compose or docker command will run "inside the machine".
In our case, Symfony is a large application. The docker-machine mounts your files through VirtualBox shared folders, which (much like osxfs) is very slow, so the application will crawl.
docker-machine-nfs
Install with:
curl -s https://raw.githubusercontent.com/adlogix/docker-machine-nfs/master/docker-machine-nfs.sh | sudo tee /usr/local/bin/docker-machine-nfs > /dev/null && sudo chmod +x /usr/local/bin/docker-machine-nfs
Run it (you will need to type the root password):
$ docker-machine-nfs default
Now your docker-machine is running on the NFS file system, and the speed should be back to normal.
Mapping your docker-machine to localhost
Normally the Docker container will run under 192.168.99.100:9000.
Run in a terminal:
$ vboxmanage modifyvm default --natpf1 "default-map,tcp,,9000,,9000"
You can then access it from localhost:9000.
It's possible to get performance with Docker for Mac almost as fast as native shared volumes with Linux by using Mutagen. A benchmark is available here.
I created a full example for a Symfony project; it can be used for any type of project in any language.
I had a similar problem. In my case I was running a Python script within a Docker container and it was really slow. The way I solved this was to use the "old" Docker Toolbox.
It's not ideal, but it worked for me.
I have a detailed solution to this problem in my answer here: docker on OSX slow volumes. Please check it out.
With it there are no slowdowns and no extra software to install.
Known issue
This is a known issue: https://forums.docker.com/t/file-access-in-mounted-volumes-extremely-slow-cpu-bound/8076.
I wouldn't recommend Docker Toolbox (https://www.docker.com/products/docker-toolbox) if you have Docker for Mac (https://www.docker.com/docker-mac).
Docker for Mac does not use VirtualBox, but rather HyperKit, a
lightweight macOS virtualization solution built on top of
Hypervisor.framework in macOS 10.10 Yosemite and higher.
https://docs.docker.com/docker-for-mac/docker-toolbox/#the-docker-for-mac-environment
My workaround
I have created a workaround which may help you. I use http://docker-sync.io/ for my Symfony project. Before using docker-sync a page took 30 seconds to load; now it's below 1 second - https://github.com/Arkowsky/docker_symfony
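For context, a minimal docker-sync configuration looks roughly like this (a sketch based on the docker-sync docs; the sync name app-sync is an assumption, see the linked repository for a complete Symfony example):
# docker-sync.yml
version: "2"
syncs:
  app-sync:   # hypothetical sync/volume name
    src: './'
In docker-compose.yml the service then mounts app-sync as an external volume (typically with the :nocopy flag) instead of binding the host path directly, and you bring everything up with docker-sync-stack start.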
