How to run TensorFlow with GPU support in docker-compose?

I want to create a neural network in TensorFlow 2.x that trains on a GPU, and I want to set up all the necessary infrastructure inside a docker-compose network (assuming that this is actually possible for now). As far as I know, in order to train a TensorFlow model on a GPU, I need the CUDA Toolkit and the NVIDIA driver. Installing these dependencies natively on my computer (OS: Ubuntu 18.04) is always quite a pain, as there are many version dependencies between TensorFlow, CUDA and the NVIDIA driver. So I was trying to find a way to create a docker-compose file that contains a service for TensorFlow, CUDA and the NVIDIA driver, but I am getting the following error:
# Start the services
sudo docker-compose -f docker-compose-test.yml up --build
Starting vw_image_cls_nvidia-driver_1 ... done
Starting vw_image_cls_nvidia-cuda_1 ... done
Recreating vw_image_cls_tensorflow_1 ... error
ERROR: for vw_image_cls_tensorflow_1 Cannot start service tensorflow: OCI runtime create failed: container_linux.go:346: starting container process caused "exec: \"import\": executable file not found in $PATH": unknown
ERROR: for tensorflow Cannot start service tensorflow: OCI runtime create failed: container_linux.go:346: starting container process caused "exec: \"import\": executable file not found in $PATH": unknown
ERROR: Encountered errors while bringing up the project.
My docker-compose file looks as follows:
# version 2.3 is required for NVIDIA runtime
version: '2.3'

services:
  nvidia-driver:
    # NVIDIA GPU driver used by the CUDA Toolkit
    image: nvidia/driver:440.33.01-ubuntu18.04
    environment:
      - NVIDIA_VISIBLE_DEVICES=all
    volumes:
      # Do we need this volume to make the driver accessible by other containers in the network?
      - nvidia_driver:/usr/local/nvidia/:ro # Taken from here: http://collabnix.com/deploying-application-in-the-gpu-accelerated-data-center-using-docker/
    networks:
      - net

  nvidia-cuda:
    depends_on:
      - nvidia-driver
    image: nvidia/cuda:10.1-base-ubuntu18.04
    volumes:
      # Do we need the driver volume here?
      - nvidia_driver:/usr/local/nvidia/:ro # Taken from here: http://collabnix.com/deploying-application-in-the-gpu-accelerated-data-center-using-docker/
      # Do we need to create an additional volume for this service to be accessible by the tensorflow service?
    devices:
      # Do we need to list the devices here, or only in the tensorflow service? Taken from here: http://collabnix.com/deploying-application-in-the-gpu-accelerated-data-center-using-docker/
      - /dev/nvidiactl
      - /dev/nvidia-uvm
      - /dev/nvidia0
    networks:
      - net

  tensorflow:
    image: tensorflow/tensorflow:2.0.1-gpu # Does this ship with cuda10.0 installed or do I need a separate container for it?
    runtime: nvidia
    restart: always
    privileged: true
    depends_on:
      - nvidia-cuda
    environment:
      - NVIDIA_VISIBLE_DEVICES=all
    volumes:
      # Volumes related to source code and config files
      - ./src:/src
      - ./configs:/configs
      # Do we need the driver volume here?
      - nvidia_driver:/usr/local/nvidia/:ro # Taken from here: http://collabnix.com/deploying-application-in-the-gpu-accelerated-data-center-using-docker/
      # Do we need an additional volume from the nvidia-cuda service?
    command: import tensorflow as tf; print(tf.reduce_sum(tf.random.normal([1000, 1000]))); print("SUCCESS")
    devices:
      # Devices listed here: http://collabnix.com/deploying-application-in-the-gpu-accelerated-data-center-using-docker/
      - /dev/nvidiactl
      - /dev/nvidia-uvm
      - /dev/nvidia0
      - /dev/nvidia-uvm-tools
    networks:
      - net

volumes:
  nvidia_driver:

networks:
  net:
    driver: bridge
And my /etc/docker/daemon.json file looks as follows:
{"default-runtime":"nvidia",
"runtimes": {
"nvidia": {
"path": "/usr/bin/nvidia-container-runtime",
"runtimeArgs": []
}
}
}
So, it seems like the error is somehow related to configuring the nvidia runtime, but more importantly, I am almost certain that I didn't set up my docker-compose file correctly. So, my questions are:
Is it actually possible to do what I am trying to do?
If yes, did I setup my docker-compose file correctly (see comments in docker-compose.yml)?
How do I fix the error message I received above?
Thank you very much for your help, I highly appreciate it.

I agree that installing all tensorflow-gpu dependencies is rather painful. Fortunately, it's rather easy with Docker, as you only need the NVIDIA Driver and the NVIDIA Container Toolkit (a sort of a plugin). The rest (CUDA, cuDNN) comes bundled inside the Tensorflow images, so you don't need them on the Docker host.
The driver can be deployed as a container too, but I do not recommend that for a workstation. It is meant to be used on servers where there is no GUI (X-server, etc). The subject of the containerized driver is covered at the end of this post; for now, let's see how to start tensorflow-gpu with docker-compose. The process is the same regardless of whether you have the driver in a container or not.
How to launch Tensorflow-GPU with docker-compose
Prerequisites:
docker & docker-compose
NVIDIA Container Toolkit & NVIDIA Driver
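If the toolkit is not installed yet, the following is a sketch of NVIDIA's installation steps for Ubuntu 18.04 (repository URL and package name as documented at the time of writing; check the current install guide before running):
# Add NVIDIA's package repository
distribution=$(. /etc/os-release; echo $ID$VERSION_ID)
curl -s -L https://nvidia.github.io/nvidia-docker/gpgkey | sudo apt-key add -
curl -s -L https://nvidia.github.io/nvidia-docker/$distribution/nvidia-docker.list \
  | sudo tee /etc/apt/sources.list.d/nvidia-docker.list
# Install the toolkit and restart the Docker daemon
sudo apt-get update && sudo apt-get install -y nvidia-container-toolkit
sudo systemctl restart docker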
To enable GPU support for a container you need to create the container with NVIDIA Container Toolkit. There are two ways you can do that:
You can configure Docker to always use the nvidia container runtime. This is fine, as it behaves just like the default runtime unless some NVIDIA-specific environment variables are present (more on that later). This is done by placing "default-runtime": "nvidia" into Docker's daemon.json:
/etc/docker/daemon.json:
{
  "runtimes": {
    "nvidia": {
      "path": "/usr/bin/nvidia-container-runtime",
      "runtimeArgs": []
    }
  },
  "default-runtime": "nvidia"
}
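Docker reads daemon.json only at startup, so restart the daemon for the new default runtime to take effect:
sudo systemctl restart docker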
You can select the runtime during container creation. With docker-compose it is only possible with format version 2.3.
Here is a sample docker-compose.yml to launch Tensorflow with GPU:
version: "2.3" # the only version where 'runtime' option is supported
services:
test:
image: tensorflow/tensorflow:2.3.0-gpu
# Make Docker create the container with NVIDIA Container Toolkit
# You don't need it if you set 'nvidia' as the default runtime in
# daemon.json.
runtime: nvidia
# the lines below are here just to test that TF can see GPUs
entrypoint:
- /usr/local/bin/python
- -c
command:
- "import tensorflow as tf; tf.test.is_gpu_available(cuda_only=False, min_cuda_compute_capability=None)"
By running this with docker-compose up you should see a line with the GPU specs in it. It appears at the end and looks like this:
test_1 | 2021-01-23 11:02:46.500189: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1402] Created TensorFlow device (/device:GPU:0 with 1624 MB memory) -> physical GPU (device: 0, name: GeForce GTX 1050, pci bus id: 0000:01:00.0, compute capability: 6.1)
And that is all you need to launch an official Tensorflow image with GPU.
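As a quick sanity check outside of compose, you can also ask a plain CUDA container for nvidia-smi (the image tag here is just an example; any CUDA image of a version your driver supports will do):
docker run --rm --runtime=nvidia nvidia/cuda:10.1-base-ubuntu18.04 nvidia-smi
If this prints the GPU table, the runtime and the driver are wired up correctly.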
NVIDIA Environment Variables and custom images
As I mentioned, the NVIDIA Container Toolkit works like the default runtime unless certain variables are present. These are listed and explained here. You only need to care about them if you build a custom image and want to enable GPU support in it. Official Tensorflow images with GPU inherit them from the CUDA images they use as a base, so you only need to start the image with the right runtime as in the example above.
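For illustration, a custom image would set those variables itself. A minimal sketch (the variable names are the documented ones; the base image and values are just an example):
FROM ubuntu:18.04
# Tell NVIDIA Container Toolkit which GPUs and driver capabilities to expose
ENV NVIDIA_VISIBLE_DEVICES=all
ENV NVIDIA_DRIVER_CAPABILITIES=compute,utility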
If you are interested in customising a Tensorflow image, I wrote another post on that.
Host Configuration for NVIDIA driver in container
As mentioned in the beginning, this is not something you want on a workstation. The process requires you to start the driver container while no other display driver is loaded (e.g., via SSH). Furthermore, at the moment of writing only Ubuntu 16.04, Ubuntu 18.04 and Centos 7 were supported.
There is an official guide and below are extractions from it for Ubuntu 18.04.
Edit 'root' option in NVIDIA Container Toolkit settings:
sudo sed -i 's/^#root/root/' /etc/nvidia-container-runtime/config.toml
Disable the Nouveau driver modules:
sudo tee /etc/modules-load.d/ipmi.conf <<< "ipmi_msghandler" \
&& sudo tee /etc/modprobe.d/blacklist-nouveau.conf <<< "blacklist nouveau" \
&& sudo tee -a /etc/modprobe.d/blacklist-nouveau.conf <<< "options nouveau modeset=0"
If you are using an AWS kernel, ensure that the i2c_core kernel module is enabled:
sudo tee /etc/modules-load.d/ipmi.conf <<< "i2c_core"
Update the initramfs:
sudo update-initramfs -u
Now it's time to reboot for the changes to take effect. After the reboot, check that no nouveau or nvidia modules are loaded. The commands below should return nothing:
lsmod | grep nouveau
lsmod | grep nvidia
Starting driver in container
The guide offers a command to run the driver; I prefer docker-compose. Save the following as driver.yml:
version: "3.0"
services:
driver:
image: nvidia/driver:450.80.02-ubuntu18.04
privileged: true
restart: unless-stopped
volumes:
- /run/nvidia:/run/nvidia:shared
- /var/log:/var/log
pid: "host"
container_name: nvidia-driver
Use docker-compose -f driver.yml up -d to start the driver container. It will take a couple of minutes to compile modules for your kernel. You may use docker logs nvidia-driver -f to follow the process; wait for the 'Done, now waiting for signal' line to appear. Otherwise use lsmod | grep nvidia to see if the driver modules are loaded. When it's ready you should see something like this:
nvidia_modeset 1183744 0
nvidia_uvm 970752 0
nvidia 19722240 17 nvidia_uvm,nvidia_modeset

Docker Compose v1.27.0+
Since Docker Compose v1.27.0, version 3.x compose files support GPU device reservations:
version: "3.6"
services:
jupyter-8888:
image: "tensorflow/tensorflow:latest-gpu-jupyter"
env_file: "env-file"
deploy:
resources:
reservations:
devices:
- driver: "nvidia"
device_ids: ["0"]
capabilities: [gpu]
ports:
- 8880:8888
volumes:
- workspace:/workspace
- data:/data
If you want to expose specific GPU ids, e.g. 0 and 3:
device_ids: ['0', '3']
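Alternatively, if you want every available GPU rather than specific ids, the same reservation can use a count (a sketch of the equivalent block):
deploy:
  resources:
    reservations:
      devices:
        - driver: "nvidia"
          count: all # instead of device_ids
          capabilities: [gpu]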

Managed to get it working by installing WSL2 on my Windows machine to use VS Code along with the Remote-Containers extension. Here is a collection of articles that helped a lot with the installation of WSL2 and using VS Code from within it:
https://learn.microsoft.com/en-us/windows/wsl/install-win10
ubuntu.com/blog/getting-started-with-cuda-on-ubuntu-on-wsl-2
https://code.visualstudio.com/docs/remote/containers
With the Remote-Containers extension of VS Code, you can then set up your devcontainer based on a docker-compose file (or just a Dockerfile, as I did), which is probably better explained in the third link above. One thing for myself to remember is that when defining the .devcontainer.json file you need to make sure to set
// Optional arguments passed to `docker run ...`
"runArgs": [
  "--gpus", "all"
]
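For context, a minimal .devcontainer.json around that setting might look like this (the name and image are placeholders for your own):
{
  "name": "tf-gpu",
  "image": "tensorflow/tensorflow:latest-gpu",
  // Optional arguments passed to `docker run ...`
  "runArgs": ["--gpus", "all"]
}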
Before VS Code, I used PyCharm for a long time, so switching to VS Code was quite a pain at first, but VS Code along with WSL2, Remote-Containers, and the Pylance extension have made it quite easy to develop in a container with GPU support. As far as I know, PyCharm doesn't support debugging inside a container in WSL at the moment, because of:
https://intellij-support.jetbrains.com/hc/en-us/community/posts/360009752059-Using-docker-compose-interpreter-on-wsl-project-Windows-
https://youtrack.jetbrains.com/issue/WI-53325

Related

docker compose: use GPU if available, else start container without one

I'm using docker compose to run a container:
version: "3.9"
services:
app:
image: nvidia/cuda:11.0.3-base-ubuntu20.04
deploy:
resources:
reservations:
devices:
- capabilities: [ gpu ]
The container can benefit from the presence of a GPU, but it does not strictly need one. Using the above docker-compose.yaml results in an error
Error response from daemon: could not select device driver "" with capabilities: [[gpu]]
when being used on a machine without a GPU. Is it possible to specify "use a GPU, if one is available, else start the container without one"?
@herku, there are no conditional statements in docker compose. In 2018 the feature was declared out of scope: https://github.com/docker/compose/issues/5756
Anyway, you can check this answer for options to work around the problem:
https://stackoverflow.com/a/50393225/3730077
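One workaround along those lines (a sketch; file names are arbitrary) is to keep the GPU reservation in a compose override file and include it only when the host actually has a GPU:
# docker-compose.gpu.yml contains only the GPU reservation:
# services:
#   app:
#     deploy:
#       resources:
#         reservations:
#           devices:
#             - capabilities: [gpu]
if nvidia-smi > /dev/null 2>&1; then
  docker compose -f docker-compose.yml -f docker-compose.gpu.yml up -d
else
  docker compose -f docker-compose.yml up -d
fi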

Allocating more memory to Docker Enterprise Preview Edition

I am running Docker Enterprise Preview Edition on Windows Server 2019 and have managed to pull and run the docker-compose.yml file below. However, shortly afterwards the container shuts down, and when I run docker-compose logs it shows me the insufficient-memory issue below:
Docker-compose file
version: '3.7'

services:
  elasticsearch:
    container_name: elasticsearch
    # image: docker.elastic.co/elasticsearch/elasticsearch:7.9.2
    image: docker.elastic.co/elasticsearch/elasticsearch:7.17.1
    deploy:
      resources:
        limits:
          cpus: 0.25
          memory: 4096m
    ports:
      - 9200:9200
    volumes:
      - C:\DockerContainers\Elasticsearch\data:/usr/share/elasticsearch/data
      - C:\DockerContainers\Elasticsearch\config\certs:/usr/share/elasticsearch/config/certs
    environment:
      - xpack.monitoring.enabled=true
      - xpack.watcher.enabled=true
      - ES_JAVA_OPTS=-Xms512m -Xmx512m
      - discovery.type=single-node
    # networks:
    #   - elastic

  kibana:
    container_name: kibana
    # image: docker.elastic.co/kibana/kibana:7.9.2
    image: docker.elastic.co/kibana/kibana:7.17.1
    deploy:
      resources:
        limits:
          cpus: 0.25
          memory: 4096m
    ports:
      - 5601:5601
    volumes:
      - C:\DockerContainers\Elasticsearch\Kibana\config\certs:/usr/share/kibana/config/certs
    depends_on:
      - elasticsearch
    # networks:
    #   - elastic

# networks:
#   elastic:
#     driver: nat
Docker logs
elasticsearch | # There is insufficient memory for the Java Runtime Environment to continue.
elasticsearch | # Native memory allocation (mmap) failed to map 65536 bytes for committing reserved memory.
elasticsearch | # An error report file with more information is saved as:
elasticsearch | # logs/hs_err_pid7.log
I read on the elasticsearch Docker guideline that it needs at least 4GB RAM. I have included the RAM limit in the docker compose yml file but it doesn't seem to take effect. Does anyone know how to set the memory usage for Docker which is running on Windows Server 2019?
I ran into this same issue trying to start Docker using almost exactly the same configuration as you, but doing so from Windows 10 instead of Windows Server 2019. I suspect the issue isn't the memory configuration for the Elastic containers, but rather the Docker host itself needs to be tweaked.
I'm not sure how to go about increasing memory for the Docker host when running Docker Enterprise Preview Edition, but for something like Docker Desktop, this can be done by changing the memory limit afforded to it by adjusting the Settings -> Advanced -> Memory slider. Docker Desktop may need to be restarted afterwards for this change to take effect.
Similarly, if you are running a Docker machine, like I am, it may be necessary to recreate it, but with more than the default 1GB that is allotted to new Docker machine instances. Something like this:
docker-machine create -d vmwareworkstation --vmwareworkstation-memory-size 8192 default
Swap in whatever vm type makes sense for you (e.g., VirtualBox) and remember that you need to run docker-machine env | Invoke-Expression in each console window in which you intend to run Docker commands.
I saw definite improvement after giving the Docker host more breathing room, as the memory-related errors disappeared. However, the containers still failed to start due to the following error:
max virtual memory areas vm.max_map_count [65530] is too low, increase to at least [262144]
This is a known issue (see https://www.elastic.co/guide/en/elasticsearch/reference/current/docker.html#_set_vm_max_map_count_to_at_least_262144).
Again, if you're using Docker Desktop with WSL2 installed, you can run this:
wsl -d docker-desktop sysctl -w vm.max_map_count=262144
For a Docker machine, you'll need to ssh into it and set the vm.max_map_count directly:
docker-machine env | Invoke-Expression
docker-machine ssh default
sudo sysctl -w vm.max_map_count=262144
The above change to the Docker machine will last during your current session but will be lost after reboot. You can make it stick by:
adding sysctl -w vm.max_map_count=262144 to the /var/lib/boot2docker/bootlocal.sh file. Note this file may not exist before your changes.
running chmod +x /var/lib/boot2docker/bootlocal.sh
exit ssh and restart the Docker machine via docker-machine restart default.
to confirm the change, run docker-machine ssh default sudo sysctl vm.max_map_count. You should see it set to 262144.
After these changes, I was able to bring the elastic containers up. You can smoke test this by issuing a GET request for http://localhost:9200 in either Postman or curl. If you're using Docker machine, the ports you've set up are accessible to the machine, but not to your Windows box. To be able to connect to port 9200, you'll need to set up port forwarding via the docker-machine ssh -f -N -L 9200:localhost:9200 command.
Also, consider adding the following to your Docker compose file:
environment:
  - bootstrap.memory_lock=true
ulimits:
  memlock:
    soft: -1
    hard: -1
Hope this helps!
This is a late response, but it may still be useful for anyone out there.
In fact, on Windows Server, Docker Desktop is the best option for running Docker. If you use Docker Enterprise Edition on Windows Server 2019, it has some restrictions on memory (RAM) allocation to containers if you have not purchased license keys to operate. At most it will allocate 9.7MB of memory to a container. You can confirm this with the command below:
docker stats <container id>
Here you cannot see the MEM_USAGE/LIMIT column at all in the output, unlike with Docker Desktop.
So when your container requests more than 9.7MB of memory, it will go down. Also, you cannot use the deploy option in the compose file to reserve memory for a specific container with Enterprise Edition on Windows Server 2019; it will throw the error 'memory reservation not supported'. The compose file will accept mem_limit, but again it will not allocate the mentioned memory.
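For reference, the v2-style key that the compose file accepts (even though, as described above, Enterprise Edition on Windows Server 2019 does not actually enforce it) looks like this:
services:
  elasticsearch:
    mem_limit: 4g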
Note - The advantage of Docker Enterprise Edition is that it will always run in the background even if the user logs off the server. Docker Desktop, by contrast, stops running when the user logs off.

Installing elasticsearch 7 on raspberry pi 3

I am trying to install the latest Elasticsearch on my Raspberry Pi 3 by following the installation tutorial, but nothing I have tried so far works.
Some info about my system:
$ sudo apt-get update
$ sudo apt-get upgrade
$ cat /etc/os-release
PRETTY_NAME="Raspbian GNU/Linux 9 (stretch)"
NAME="Raspbian GNU/Linux"
VERSION_ID="9"
VERSION="9 (stretch)"
ID=raspbian
ID_LIKE=debian
$ java -version
openjdk version "9-Raspbian"
OpenJDK Runtime Environment (build 9-Raspbian+0-9b181-4bpo9rpt1)
OpenJDK Server VM (build 9-Raspbian+0-9b181-4bpo9rpt1, mixed mode)
// I tried also with openjdk-java-8
What I've tried
install via sudo-apt
$ sudo apt-get install elasticsearch
....
Preparing to unpack .../elasticsearch_1.7.5-1_all.deb .
....
$ ./usr/share/elasticsearch/bin/elasticsearch
Exception in thread "main" java.lang.NoClassDefFoundError: org/elasticsearch/common/jackson/dataformat/yaml/snakeyaml/error/YAMLException
at org.elasticsearch.common.jackson.dataformat.yaml.YAMLFactory._createParser(YAMLFactory.java:426)
at org.elasticsearch.common.jackson.dataformat.yaml.YAMLFactory.createParser(YAMLFactory.java:327)
at org.elasticsearch.common.xcontent.yaml.YamlXContent.createParser(YamlXContent.java:90)
at org.elasticsearch.common.settings.loader.XContentSettingsLoader.load(XContentSettingsLoader.java:45)
at org.elasticsearch.common.settings.loader.YamlSettingsLoader.load(YamlSettingsLoader.java:46)
at org.elasticsearch.common.settings.ImmutableSettings$Builder.loadFromStream(ImmutableSettings.java:982)
at org.elasticsearch.common.settings.ImmutableSettings$Builder.loadFromUrl(ImmutableSettings.java:969)
at org.elasticsearch.node.internal.InternalSettingsPreparer.prepareSettings(InternalSettingsPreparer.java:110)
at org.elasticsearch.bootstrap.Bootstrap.initialSettings(Bootstrap.java:144)
at org.elasticsearch.bootstrap.Bootstrap.main(Bootstrap.java:215)
at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:32)
Caused by: java.lang.ClassNotFoundException: org.elasticsearch.common.jackson.dataformat.yaml.snakeyaml.error.YAMLException
at java.base/jdk.internal.loader.BuiltinClassLoader.loadClass(BuiltinClassLoader.java:582)
at java.base/jdk.internal.loader.ClassLoaders$AppClassLoader.loadClass(ClassLoaders.java:185)
at java.base/java.lang.ClassLoader.loadClass(ClassLoader.java:496)
... 11 more
go to the elasticsearch downloads page and get the tar file
$ ./elasticsearch-7.1.1/bin/elasticsearch
./elasticsearch-7.1.1/bin/elasticsearch-env: line 69: /home/pi/elasticsearch-7.1.1/jdk/bin/java: cannot execute binary file: Exec format error
docker path
$ docker --version
Docker version 18.04.0-ce, build 3d479c0
$ docker image ls
REPOSITORY TAG IMAGE ID CREATED SIZE
docker.elastic.co/elasticsearch/elasticsearch 7.1.1 b0e9f9f047e6 4 weeks ago 894MB
$ docker run -p 9200:9200 -p 9300:9300 -e "discovery.type=single-node" docker.elastic.co/elasticsearch/elasticsearch:7.1.1
standard_init_linux.go:190: exec user process caused "exec format error"
Has anyone managed to install Elasticsearch 7 on Raspberry Pi 3? Is there any way to go around the issues listed above?
Unfortunately, unlike all previous releases, the deb package for ElasticSearch 7 is only packaged for Intel architectures. I believe the dependencies are the JVM and the machine learning module, which can be turned off, but it would have to be repackaged or installed by hand from the files in the deb package. (If I don't get round to doing it, I'm sure someone else will eventually.)
Unless you particularly need ES7 features, the easiest thing would be to install the last version 6, which will install on Raspbian. It's here: https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-6.8.0.deb
You will want to change the default memory used from 1G to 512M in /etc/elasticsearch/jvm.options and turn off machine learning in /etc/elasticsearch/elasticsearch.yml (xpack.ml.enabled: false).
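Concretely, that means edits along these lines (the heap values are the ones suggested above):
# /etc/elasticsearch/jvm.options - lower the heap from 1g to 512m
-Xms512m
-Xmx512m

# /etc/elasticsearch/elasticsearch.yml - turn off machine learning
xpack.ml.enabled: false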
While it will run with openjre, the default Java runtime on Raspbian, it runs about 30 times slower than on an equivalent Intel machine. I've never got to the bottom of why, but it is fine if you install the Oracle JRE instead:
apt-get install oracle-java8-jdk
Note the Raspbian/Debian repo version (as in apt-get install) is version 1, not v7 - ancient, avoid it.
In extensive use of ES6 (and its predecessors) on Raspberry Pi, I have not found anything to work differently from Intel, despite their statement that they don't support anything other than Intel.
However, the RPi struggles to run the whole ELK (Elasticsearch, Logstash, Kibana) stack (I did try that): it really doesn't have enough memory. The RPi 4 with 4GB might do better (I haven't tried), or you could distribute it across three separate Pis. I did get ELK 5 to run, but it exhausted memory after a few days' use, and I couldn't get ELK 6 to run at all.
On Raspbian 9, after a test install of elasticsearch-7 and purging it to install elasticsearch-6, in addition to what is said above, I had to define JAVA_HOME in /etc/default/elasticsearch:
# Elasticsearch Java path
JAVA_HOME=/usr/lib/jvm/java-8-openjdk-armhf
The user was not the right one for two folders; to fix it:
sudo chown -R elasticsearch:elasticsearch /etc/elasticsearch
sudo chown -R elasticsearch:elasticsearch /var/lib/elasticsearch
In our case at charik.org, we are running a cluster of RPi 4s with Ubuntu Server 19.10, which is the only OS fully supporting the Arm64v8 architecture on Raspberry Pi.
The decision to use this OS was not easy, because it consumes more memory than a lightweight Raspbian, but the ease of use fully explains our decision.
We built an Elasticsearch v7.5.1 image for the Arm64v8 architecture from the ES package with no JDK embedded, found here: elasticsearch-no-jdk.
Check out our Docker Hub repo for the built image: charik/elasticsearch:latest
Elasticsearch embeds its own Java binaries in the jdk folder.
You can point it to your system's own Java by setting JAVA_HOME:
JAVA_HOME=/usr ./bin/elasticsearch
You will then be running unsupported, but you can use Elasticsearch on ARM...
In order to easily use Elasticsearch, Kibana and Elastalert on a Raspberry Pi, we made these 3 Docker images available on Docker Hub:
Elasticsearch: https://hub.docker.com/r/comworkio/elasticsearch
Kibana: https://hub.docker.com/r/comworkio/kibana
Elastalert: https://hub.docker.com/r/comworkio/elastalert
Here's the git repository containing the Dockerfiles and documentation: https://gitlab.comwork.io/oss/elasticstack/elasticstack-arm (the Docker images are built on Raspberry Pis used as GitLab runners, then pushed to Docker Hub).
We'll keep them up to date with the right tags until Elastic takes care of it (I think they will provide ARM-based images some day, after discussing this matter with them).
Here's an example of docker-compose file you can use on a single raspberrypi:
version: "3.3"
services:
es01:
image: comworkio/elasticsearch:7.9.1-1.8-arm
container_name: es01
ports:
- 9200:9200
- 9300:9300
networks:
- covid19
volumes:
- data01:/usr/share/elasticsearch/data
kib01:
image: comworkio/kibana:7.9.1-1.9-arm
container_name: kib01
ports:
- 5601:5601
environment:
- ES_PROTO=http
- ES_HOST=es01
- ES_PORT=9200
networks:
- covid19
depends_on:
- es01
volumes:
data01:
driver: local
networks:
covid19:
driver: bridge
Then here you go with:
docker-compose up -d
Then your Kibana is accessible on http://{your_raspberrypi}:5601 and your Elasticsearch API on http://{your_raspberrypi}:9200. This works pretty well with a Raspberry Pi 4 Model B with 8GB of RAM. If you don't have this model but an older one, you can use two of them, with at least 2GB for your Elastic node and 2GB for your Kibana node. I also advise you to use a Model 4 in order to be able to boot from an SSD drive instead of an SD card.
For the french speakers, here's a demo using those images: https://youtu.be/BC1iSnoe15k
And the repository of the project with some documentations (in english): https://gitlab.comwork.io/oss/covid19
This article helps you install Elasticsearch on a Raspberry Pi.
Normally it's difficult to install Elasticsearch on a Raspberry Pi because its bundled JDK does not support the armhf platform. So in the article we run Elasticsearch using the no_jdk bundle of Elasticsearch and provide the JDK (JAVA_HOME) of the Raspberry Pi itself.
If there is any difficulty, feel free to ask.

use nvidia-docker from docker-compose

I would like to run 2 docker images with docker-compose.
one image should run with nvidia-docker and the other with docker.
I've seen this post: use nvidia-docker-compose launch a container, but exited soon
but this is not working for me (not even when running only one image)...
any idea would be great.
UPDATE: please check nvidia-docker 2 and its support of docker-compose first:
https://github.com/NVIDIA/nvidia-docker/wiki/Frequently-Asked-Questions#do-you-support-docker-compose
(I'd first suggest adding the nvidia-docker tag.)
If you look at the nvidia-docker-compose code here, it only generates a specific docker-compose file after querying the nvidia configuration on localhost:3476.
You can also write this docker-compose file by hand, as they turn out to be quite simple. Follow this example, replace 375.66 with your nvidia driver version, and put as many /dev/nvidia[n] lines as you have graphics cards (I did not try to put services on separate GPUs, but go for it!):
services:
  exampleservice0:
    devices:
      - /dev/nvidia0
      - /dev/nvidia1
      - /dev/nvidiactl
      - /dev/nvidia-uvm
      - /dev/nvidia-uvm-tools
    environment:
      - EXAMPLE_ENV_VARIABLE=example
    image: company/image
    volumes:
      - ./disk:/disk
      - nvidia_driver_375.66:/usr/local/nvidia:ro
version: '2'
volumes:
  media: null
  nvidia_driver_375.66:
    external: true
Then just run this hand-made docker-compose file with a classic docker-compose command.
Maybe you can then compose with non nvidia dockers by skipping the nvidia specific stuff in the other services.
In addition to the accepted answer, here's my approach, which is a bit shorter.
I needed to use the old version of docker-compose (2.3) because of the required runtime: nvidia (this won't necessarily work with version: 3 - see this). Setting NVIDIA_VISIBLE_DEVICES=all will make all the GPUs visible.
version: '2.3'

services:
  your-service-name:
    runtime: nvidia
    environment:
      - NVIDIA_VISIBLE_DEVICES=all
    # ...your stuff
My example is available here.
Tested on NVIDIA Docker 2.5.0, Docker CE 19.03.13 and NVIDIA-SMI 418.152.00 and CUDA 10.1 on Debian 10.

Docker mac symfony 3 very slow

I'm starting a new project with Symfony 3 and I want to use Docker for the development environment. We will work on this project with a dozen developers so I want to have an easy install process.
Here's my docker-compose.yml
version: '2'

services:
  db:
    image: mysql
    ports:
      - "3307:3306"
    environment:
      MYSQL_ROOT_PASSWORD: root
      MYSQL_DATABASE: mydb
      MYSQL_USER: root
      MYSQL_PASSWORD: root

  php:
    build: ./php-fpm
    expose:
      - "9001"
    volumes:
      - .:/var/www/project
      - ./var/logs:/var/www/project/app/logs
    links:
      - db

  nginx:
    build: ./nginx
    ports:
      - "8001:80"
    links:
      - php
    volumes_from:
      - php
    volumes:
      - ./var/logs/nginx/:/var/log/nginx
I installed the recent Docker for Mac application (beta). The big issue is that my Symfony app is very, very slow (a simple page takes more than 5 seconds). The same app with MAMP is much faster (500ms max). Is this a known issue of Docker? How can I debug it?
This is a known issue. Your local file system is being mounted in the Docker for Mac linux VM with osxfs, and there is some additional latency when reading and writing these mounted files. For small applications this isn't too noticeable, but for larger applications that could read thousands of files on a single request it can slow things down significantly.
Sorry for the late answer, but you could install Docker CE Edge, because it supports cached mode.
Download Docker Edge (while waiting for the stable version of Docker that will support cached mode), then add the following lines to your docker-compose.yml file:
php:
  volumes:
    - ${SYMFONY_APP_PATH}:/var/www/symfony:cached
Replace ${SYMFONY_APP_PATH} with your own path.
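For directories the container writes heavily (cache, logs), you can go one step further with the delegated flag, which relaxes consistency in the other direction (the extra path below is a typical Symfony layout; adjust to yours):
php:
  volumes:
    - ${SYMFONY_APP_PATH}:/var/www/symfony:cached
    # container-authored data benefits more from 'delegated'
    - ${SYMFONY_APP_PATH}/var/cache:/var/www/symfony/var/cache:delegated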
Actually, I'm using Docker to run projects locally. To make Docker faster I used the setup below:
MAC OSX:
Docker Toolbox
Install the dmg file normally.
Open your terminal and type:
$ docker-machine create --driver virtualbox default
$ docker-machine env default
$ eval "$(docker-machine env default)"
Now you have the docker-machine up and running; any docker-compose or docker command will run "inside the machine".
In our case "Symfony" is a large application. The docker-machine shared folders are mounted with vboxsf, so the application will be very slow.
docker-machine-nfs
Install with:
curl -s https://raw.githubusercontent.com/adlogix/docker-machine-nfs/master/docker-machine-nfs.sh | sudo tee /usr/local/bin/docker-machine-nfs > /dev/null && sudo chmod +x /usr/local/bin/docker-machine-nfs
Run it (you will need to type the root password):
$ docker-machine-nfs default
Now your docker-machine is running on the NFS file system, and the speed is back to normal.
Mapping your docker-machine to localhost
Normally the docker container will run under 192.168.99.100:9000.
Run this in a terminal:
$ vboxmanage modifyvm default --natpf1 "default-map,tcp,,9000,,9000"
Now you can access it from localhost:9000.
It's possible to get performance with Docker for Mac almost as fast as native shared volumes with Linux by using Mutagen. A benchmark is available here.
I created a full example for a Symfony project, it can be used for any type of project in any language.
I had a similar problem. In my case I was running a Python script within a Docker container and it was really slow. I solved this by using the "old" docker-toolbox.
It's not ideal, but it worked for me.
I have a detailed solution to this problem in my answer here: docker on OSX slow volumes, please check it out.
With it there are no slowdowns and no extra software to install.
Known issue
This is a known issue: https://forums.docker.com/t/file-access-in-mounted-volumes-extremely-slow-cpu-bound/8076.
I wouldn't recommend https://www.docker.com/products/docker-toolbox if you have https://www.docker.com/docker-mac.
Docker for Mac does not use VirtualBox, but rather HyperKit, a
lightweight macOS virtualization solution built on top of
Hypervisor.framework in macOS 10.10 Yosemite and higher.
https://docs.docker.com/docker-for-mac/docker-toolbox/#the-docker-for-mac-environment
My workaround
I have created a workaround which may help you. I use http://docker-sync.io/ for my Symfony project. Before using docker-sync a page took 30 seconds to load; now it's below 1 second - https://github.com/Arkowsky/docker_symfony
