I am trying to install the latest Elasticsearch on my Raspberry Pi 3 by following the installation tutorial; however, I found absolutely nothing that works.
Some info about my system:
$ sudo apt-get update
$ sudo apt-get upgrade
$ cat /etc/os-release
PRETTY_NAME="Raspbian GNU/Linux 9 (stretch)"
NAME="Raspbian GNU/Linux"
VERSION_ID="9"
VERSION="9 (stretch)"
ID=raspbian
ID_LIKE=debian
$ java -version
openjdk version "9-Raspbian"
OpenJDK Runtime Environment (build 9-Raspbian+0-9b181-4bpo9rpt1)
OpenJDK Server VM (build 9-Raspbian+0-9b181-4bpo9rpt1, mixed mode)
// I also tried with OpenJDK 8
What I've tried
Install via apt-get
$ sudo apt-get install elasticsearch
....
Preparing to unpack .../elasticsearch_1.7.5-1_all.deb .
....
$ /usr/share/elasticsearch/bin/elasticsearch
Exception in thread "main" java.lang.NoClassDefFoundError: org/elasticsearch/common/jackson/dataformat/yaml/snakeyaml/error/YAMLException
at org.elasticsearch.common.jackson.dataformat.yaml.YAMLFactory._createParser(YAMLFactory.java:426)
at org.elasticsearch.common.jackson.dataformat.yaml.YAMLFactory.createParser(YAMLFactory.java:327)
at org.elasticsearch.common.xcontent.yaml.YamlXContent.createParser(YamlXContent.java:90)
at org.elasticsearch.common.settings.loader.XContentSettingsLoader.load(XContentSettingsLoader.java:45)
at org.elasticsearch.common.settings.loader.YamlSettingsLoader.load(YamlSettingsLoader.java:46)
at org.elasticsearch.common.settings.ImmutableSettings$Builder.loadFromStream(ImmutableSettings.java:982)
at org.elasticsearch.common.settings.ImmutableSettings$Builder.loadFromUrl(ImmutableSettings.java:969)
at org.elasticsearch.node.internal.InternalSettingsPreparer.prepareSettings(InternalSettingsPreparer.java:110)
at org.elasticsearch.bootstrap.Bootstrap.initialSettings(Bootstrap.java:144)
at org.elasticsearch.bootstrap.Bootstrap.main(Bootstrap.java:215)
at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:32)
Caused by: java.lang.ClassNotFoundException: org.elasticsearch.common.jackson.dataformat.yaml.snakeyaml.error.YAMLException
at java.base/jdk.internal.loader.BuiltinClassLoader.loadClass(BuiltinClassLoader.java:582)
at java.base/jdk.internal.loader.ClassLoaders$AppClassLoader.loadClass(ClassLoaders.java:185)
at java.base/java.lang.ClassLoader.loadClass(ClassLoader.java:496)
... 11 more
Go to the Elasticsearch downloads page and get the tar file:
$ ./elasticsearch-7.1.1/bin/elasticsearch
./elasticsearch-7.1.1/bin/elasticsearch-env: line 69: /home/pi/elasticsearch-7.1.1/jdk/bin/java: cannot execute binary file: Exec format error
Via Docker
$ docker --version
Docker version 18.04.0-ce, build 3d479c0
$ docker image ls
REPOSITORY TAG IMAGE ID CREATED SIZE
docker.elastic.co/elasticsearch/elasticsearch 7.1.1 b0e9f9f047e6 4 weeks ago 894MB
$ docker run -p 9200:9200 -p 9300:9300 -e "discovery.type=single-node" docker.elastic.co/elasticsearch/elasticsearch:7.1.1
standard_init_linux.go:190: exec user process caused "exec format error"
Has anyone managed to install Elasticsearch 7 on Raspberry Pi 3? Is there any way to go around the issues listed above?
Unfortunately, unlike all previous releases, the deb package for Elasticsearch 7 is only packaged for Intel architectures. I believe the Intel-only dependencies are the JVM and the machine learning module, which can be turned off, but it would have to be repackaged or installed by hand from the files in the deb package. (If I don't get round to doing it, I'm sure someone else will eventually.)
Unless you particularly need ES7 features, the easiest thing would be to install the last version 6, which will install on Raspbian. It's here: https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-6.8.0.deb
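If you go this route, a minimal sketch of the standard Debian package steps (do the config changes below before starting the service):
wget https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-6.8.0.deb
sudo dpkg -i elasticsearch-6.8.0.deb
sudo systemctl start elasticsearch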
You will want to change the default memory used from 1G to 512M in /etc/elasticsearch/jvm.options and turn off machine learning in /etc/elasticsearch/elasticsearch.yml (xpack.ml.enabled: false).
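For reference, the edits look roughly like this (the heap sizes live in jvm.options; your file's defaults may differ):
# /etc/elasticsearch/jvm.options
-Xms512m
-Xmx512m
# /etc/elasticsearch/elasticsearch.yml
xpack.ml.enabled: false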
While it will run with the OpenJDK runtime that Raspbian ships by default, it runs about 30 times slower than on equivalent Intel hardware. I've never got to the bottom of why, but it is fine if you install the Oracle JRE instead:
apt-get install oracle-java8-jdk
Note that the version in the Raspbian/Debian repo (what apt-get install elasticsearch gives you) is 1.x, not 7. It is ancient; avoid it.
In extensive use of ES6 (and its predecessors) on the Raspberry Pi, I have not found anything that behaves differently from Intel, despite Elastic's statement that they don't support anything other than Intel.
However, the RPi struggles to run the whole ELK (Elasticsearch, Logstash, Kibana) stack (I did try that): it really doesn't have enough memory. The RPi 4 with 4GB might do better (I haven't tried), or you could distribute the stack across three separate Pis. I did get ELK 5 to run, but it exhausted memory after a few days' use, and I couldn't get ELK 6 to run at all.
On Raspbian 9, after installing elasticsearch-7 as a test and purging it to install elasticsearch-6, in addition to what is said above I had to define JAVA_HOME in /etc/default/elasticsearch:
# Elasticsearch Java path
JAVA_HOME=/usr/lib/jvm/java-8-openjdk-armhf
The owner was also wrong on two folders; to fix it:
sudo chown -R elasticsearch:elasticsearch /etc/elasticsearch
sudo chown -R elasticsearch:elasticsearch /var/lib/elasticsearch
In our case at charik.org, we are running a cluster of RPi 4s with Ubuntu Server 19.10, which is the only OS fully supporting the Arm64v8 architecture on the Raspberry Pi.
The decision to use this OS was not easy, because it consumes more memory than a lightweight Raspbian, but the ease of use fully explains our choice.
We built an Elasticsearch v7.5.1 image for the Arm64v8 architecture from the ES package with no JDK embedded, found here: elasticsearch-no-jdk.
Check out our Docker Hub repo for the built image: charik/elasticsearch:latest
Elasticsearch embeds its own Java binaries in the jdk folder.
You can make it use your own system's Java by defining JAVA_HOME:
JAVA_HOME=/usr ./bin/elasticsearch
You will then be running an unsupported configuration, but you can use Elasticsearch on ARM.
To make it easy to use Elasticsearch, Kibana and Elastalert on a Raspberry Pi, we have made these three Docker images available on Docker Hub:
Elasticsearch: https://hub.docker.com/r/comworkio/elasticsearch
Kibana: https://hub.docker.com/r/comworkio/kibana
Elastalert: https://hub.docker.com/r/comworkio/elastalert
Here's the git repository containing the Dockerfiles and documentation: https://gitlab.comwork.io/oss/elasticstack/elasticstack-arm (the Docker images are built on Raspberry Pis used as GitLab runners, then pushed to Docker Hub).
We'll keep them up to date with the right tags until Elastic takes care of it (having discussed the matter with them, I think they will provide ARM-based images someday).
Here's an example of a docker-compose file you can use on a single Raspberry Pi:
version: "3.3"
services:
es01:
image: comworkio/elasticsearch:7.9.1-1.8-arm
container_name: es01
ports:
- 9200:9200
- 9300:9300
networks:
- covid19
volumes:
- data01:/usr/share/elasticsearch/data
kib01:
image: comworkio/kibana:7.9.1-1.9-arm
container_name: kib01
ports:
- 5601:5601
environment:
- ES_PROTO=http
- ES_HOST=es01
- ES_PORT=9200
networks:
- covid19
depends_on:
- es01
volumes:
data01:
driver: local
networks:
covid19:
driver: bridge
Then here you go with:
docker-compose up -d
Then your Kibana is accessible at http://{your_raspberrypi}:5601 and your Elasticsearch API at http://{your_raspberrypi}:9200. This works pretty well on a Raspberry Pi 4 Model B with 8GB of RAM. If you don't have that model but an older one, you can use two of them, with at least 2GB for your Elastic node and 2GB for your Kibana node. I also advise using a Model 4 so you can boot from an SSD drive instead of SD flash.
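A quick smoke test of the Elasticsearch API from any machine on the network:
curl http://{your_raspberrypi}:9200
It should return a small JSON document with the cluster name and version.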
For French speakers, here's a demo using those images: https://youtu.be/BC1iSnoe15k
And the repository of the project, with some documentation (in English): https://gitlab.comwork.io/oss/covid19
This article helps you install Elasticsearch on a Raspberry Pi.
Normally it is difficult, because Elasticsearch ships with its own JDK, which does not support the armhf platform. So the article runs Elasticsearch with the no-jdk bundle of Elasticsearch and points it at the Raspberry Pi's own JDK (JAVA_HOME).
If there is any difficulty, feel free to ask.
Related
I am getting this error from docker-compose up, on one of the containers only:
exec: "com.docker.cli": executable file not found in $PATH
The terminal process "/bin/zsh '-c', 'docker logs -f f6557b5dd19d9b2bc5a63a840464bc2b879d375fe72bc037d82a5358d4913119'" failed to launch (exit code: 1).
I uninstalled and reinstalled docker-desktop@2.3.0.5 on Mac
docker-compose build from scratch
other containers are running
I get the above error.
It used to run. I am not sure why this is happening. I know that I upgraded Docker from, I think, 2.3,
and I also think I received an update on my Mac.
Dockerfile
FROM tiangolo/uvicorn-gunicorn:python3.8
COPY requirements.txt /app/
RUN pip install -r requirements.txt
COPY ./app /app/app
#COPY config.py /app/app/
docker-compose.yml
version: "3"
services:
postgresql:
container_name: postgresql
image: postgres:12
ports:
- "5433:5432"
environment:
- POSTGRES_USER=${POSTGRES_USER}
- POSTGRES_PASSWORD=${POSTGRES_PASSWORD}
- POSTGRES_DB=${POSTGRES_DB}
volumes:
- ./postgres-data:/var/lib/postgresql/data
fastapi:
build:
context: ./fastapi/
dockerfile: Dockerfile
volumes:
- ./fastapi/app/imgs:/app/app/imgs
ports:
- "1001:80"
depends_on:
- postgresql
env_file:
- .env
pgadmin:
container_name: pgadmin
image: dpage/pgadmin4
environment:
- PGADMIN_DEFAULT_EMAIL=pgadmin4#pgadmin.org
- PGADMIN_DEFAULT_PASSWORD=admin
ports:
- "5050:80"
depends_on:
- postgresql
solr:
build:
context: ./solr/
dockerfile: Dockerfile
restart: always
ports:
- "8983:8983"
volumes:
- data:/var/solr
volumes:
data:
Update:
It worked when I downgraded to Docker Desktop 2.3.0.4.
Updated Answer:
Since VSCode Docker 1.14.0 you can now set the Docker executable path in the settings, which should help in most cases.
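For example, in VS Code's settings.json (this assumes the extension's docker.dockerPath setting; adjust the path to wherever your docker binary lives):
{
  "docker.dockerPath": "/usr/local/bin/docker"
}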
Old Answer (Option was removed from Docker Desktop):
The Desktop Docker Version 2.4.0.0 is working for me after I did deactivate the feature Enable cloud experience. You can find it under Preferences --> Command Line.
If you still experience the problem, you may try a clean remove and install of Docker, and also make sure that Docker is actually running; see other possible solution(s) here.
History of GitHub Issues:
https://github.com/docker/for-mac/issues/4956
https://github.com/microsoft/vscode-docker/issues/2366
https://github.com/microsoft/vscode-docker/issues/2578
https://github.com/microsoft/vscode-docker/issues/2894
Status (2021-06-22): VSCode Version 1.57.0 seems to have fixed the issue again.
You might get the following error message simply because you have not started Docker yet:
exec: "com.docker.cli": executable file not found in $PATH
In my case the problem was I had installed and then crudely removed the docker compose cli. This resulted in the above error to start popping up.
I got the compose CLI back using instructions from https://docs.docker.com/cloud/ecs-integration/#install-the-docker-compose-cli-on-linux and running (as root):
curl -L https://raw.githubusercontent.com/docker/compose-cli/main/scripts/install/install_linux.sh | sh
This fixed it for me.
Note: I would not recommend installing the docker-compose CLI just to fix this issue; I am sharing this in case it applies to you as well.
On Linux, ensure that the docker CLI is installed, not just Docker Desktop. You can install it using:
sudo apt install docker.io
Update: The "cloud experience" no longer exists even as an experimental feature in Docker Desktop v3.2.1. This should no longer be an issue.
If you continue to see this problem on a newer version, you will need to downgrade to Docker v3.1.0, disable the cloud experience feature, then upgrade to the newest version.
Had the exact same issue. It was fixed by starting the upgraded Docker first, then running the command again:
dostarr@DOSTARR-M-38LF ~ % docker run busybox
exec: "com.docker.cli": executable file not found in $PATH
<started docker>
dostarr@DOSTARR-M-38LF ~ % docker run busybox
dostarr@DOSTARR-M-38LF ~ %
I had the same problem when trying to run minikube tunnel, and since I didn't want to re-install anything, I ended up running it from the docker bin path (on Windows it's in 'C:\Program Files\Docker\Docker\resources\bin') and it worked.
An alternative to Docker Desktop is colima, container runtimes on macOS (and Linux) with minimal setup.
# Homebrew
brew install colima docker
colima start
Now, you can use the docker commands as before.
For docker compose commands, you have to install:
brew install docker-compose
If you have already installed Docker, it may not have started yet. Type docker run -d -p 80:80 docker/getting-started in the terminal, and it should solve the issue.
I want to create a neural network in tensorflow 2.x that trains on a GPU, and I want to set up all the necessary infrastructure inside a docker-compose network (assuming that this is actually possible for now). As far as I know, in order to train a tensorflow model on a GPU, I need the CUDA toolkit and the NVIDIA driver. Installing these dependencies natively on my computer (OS: Ubuntu 18.04) is always quite a pain, as there are many version dependencies between tensorflow, CUDA and the NVIDIA driver. So, I was trying to find a way to create a docker-compose file that contains a service for tensorflow, CUDA and the NVIDIA driver, but I am getting the following error:
# Start the services
sudo docker-compose -f docker-compose-test.yml up --build
Starting vw_image_cls_nvidia-driver_1 ... done
Starting vw_image_cls_nvidia-cuda_1 ... done
Recreating vw_image_cls_tensorflow_1 ... error
ERROR: for vw_image_cls_tensorflow_1 Cannot start service tensorflow: OCI runtime create failed: container_linux.go:346: starting container process caused "exec: \"import\": executable file not found in $PATH": unknown
ERROR: for tensorflow Cannot start service tensorflow: OCI runtime create failed: container_linux.go:346: starting container process caused "exec: \"import\": executable file not found in $PATH": unknown
ERROR: Encountered errors while bringing up the project.
My docker-compose file looks as follows:
# version 2.3 is required for NVIDIA runtime
version: '2.3'

services:
  nvidia-driver:
    # NVIDIA GPU driver used by the CUDA Toolkit
    image: nvidia/driver:440.33.01-ubuntu18.04
    environment:
      - NVIDIA_VISIBLE_DEVICES=all
    volumes:
      # Do we need this volume to make the driver accessible by other containers in the network?
      - nvidia_driver:/usr/local/nvidai/:ro # Taken from here: http://collabnix.com/deploying-application-in-the-gpu-accelerated-data-center-using-docker/
    networks:
      - net

  nvidia-cuda:
    depends_on:
      - nvidia-driver
    image: nvidia/cuda:10.1-base-ubuntu18.04
    volumes:
      # Do we need the driver volume here?
      - nvidia_driver:/usr/local/nvidai/:ro # Taken from here: http://collabnix.com/deploying-application-in-the-gpu-accelerated-data-center-using-docker/
      # Do we need to create an additional volume for this service to be accessible by the tensorflow service?
    devices:
      # Do we need to list the devices here, or only in the tensorflow service? Taken from here: http://collabnix.com/deploying-application-in-the-gpu-accelerated-data-center-using-docker/
      - /dev/nvidiactl
      - /dev/nvidia-uvm
      - /dev/nvidia0
    networks:
      - net

  tensorflow:
    image: tensorflow/tensorflow:2.0.1-gpu # Does this ship with cuda10.0 installed or do I need a separate container for it?
    runtime: nvidia
    restart: always
    privileged: true
    depends_on:
      - nvidia-cuda
    environment:
      - NVIDIA_VISIBLE_DEVICES=all
    volumes:
      # Volumes related to source code and config files
      - ./src:/src
      - ./configs:/configs
      # Do we need the driver volume here?
      - nvidia_driver:/usr/local/nvidai/:ro # Taken from here: http://collabnix.com/deploying-application-in-the-gpu-accelerated-data-center-using-docker/
      # Do we need an additional volume from the nvidia-cuda service?
    command: import tensorflow as tf; print(tf.reduce_sum(tf.random.normal([1000, 1000]))); print("SUCCESS")
    devices:
      # Devices listed here: http://collabnix.com/deploying-application-in-the-gpu-accelerated-data-center-using-docker/
      - /dev/nvidiactl
      - /dev/nvidia-uvm
      - /dev/nvidia0
      - /dev/nvidia-uvm-tools
    networks:
      - net

volumes:
  nvidia_driver:

networks:
  net:
    driver: bridge
And my /etc/docker/daemon.json file looks as follows:
{"default-runtime":"nvidia",
"runtimes": {
"nvidia": {
"path": "/usr/bin/nvidia-container-runtime",
"runtimeArgs": []
}
}
}
So, it seems like the error is somehow related to configuring the nvidia runtime, but more importantly, I am almost certain that I didn't set up my docker-compose file correctly. So, my questions are:
Is it actually possible to do what I am trying to do?
If yes, did I set up my docker-compose file correctly (see comments in docker-compose.yml)?
How do I fix the error message I received above?
Thank you very much for your help, I highly appreciate it.
I agree that installing all tensorflow-gpu dependencies is rather painful. Fortunately, it's rather easy with Docker, as you only need the NVIDIA Driver and the NVIDIA Container Toolkit (a sort of plugin). The rest (CUDA, cuDNN) ships inside the Tensorflow images, so you don't need it on the Docker host.
The driver can be deployed as a container too, but I do not recommend that for a workstation. It is meant to be used on servers where there is no GUI (X server, etc.). The subject of a containerized driver is covered at the end of this post; for now, let's see how to start tensorflow-gpu with docker-compose. The process is the same regardless of whether you have the driver in a container or not.
How to launch Tensorflow-GPU with docker-compose
Prerequisites:
docker & docker-compose
NVIDIA Container Toolkit & NVIDIA Driver
To enable GPU support for a container you need to create the container with NVIDIA Container Toolkit. There are two ways you can do that:
You can configure Docker to always use the nvidia container runtime. It is fine to do so, as it works just like the default runtime unless some NVIDIA-specific environment variables are present (more on that later). This is done by placing "default-runtime": "nvidia" into Docker's daemon.json:
/etc/docker/daemon.json:
{
  "runtimes": {
    "nvidia": {
      "path": "/usr/bin/nvidia-container-runtime",
      "runtimeArgs": []
    }
  },
  "default-runtime": "nvidia"
}
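After editing daemon.json, restart the Docker daemon so the new default runtime takes effect:
sudo systemctl restart docker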
You can select the runtime during container creation. With docker-compose it is only possible with format version 2.3.
Here is a sample docker-compose.yml to launch Tensorflow with GPU:
version: "2.3" # the only version where 'runtime' option is supported
services:
test:
image: tensorflow/tensorflow:2.3.0-gpu
# Make Docker create the container with NVIDIA Container Toolkit
# You don't need it if you set 'nvidia' as the default runtime in
# daemon.json.
runtime: nvidia
# the lines below are here just to test that TF can see GPUs
entrypoint:
- /usr/local/bin/python
- -c
command:
- "import tensorflow as tf; tf.test.is_gpu_available(cuda_only=False, min_cuda_compute_capability=None)"
By running this with docker-compose up you should see a line with the GPU specs in it. It appears at the end and looks like this:
test_1 | 2021-01-23 11:02:46.500189: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1402] Created TensorFlow device (/device:GPU:0 with 1624 MB memory) -> physical GPU (device: 0, name: GeForce GTX 1050, pci bus id: 0000:01:00.0, compute capability: 6.1)
And that is all you need to launch an official Tensorflow image with GPU.
NVIDIA Environment Variables and custom images
As I mentioned, NVIDIA Container Toolkit works like the default runtime unless some variables are present. These are listed and explained here. You only need to care about them if you build a custom image and want to enable GPU support in it. Official Tensorflow images with GPU inherit them from the CUDA images they use as a base, so you only need to start the image with the right runtime, as in the example above.
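For illustration, here is roughly what those variables look like when set in a custom image's Dockerfile; this is a sketch showing the two most common variables, so check the linked list for the exact semantics:
FROM ubuntu:18.04
# Expose all GPUs to containers created from this image
ENV NVIDIA_VISIBLE_DEVICES all
# Request the compute (CUDA) and utility (nvidia-smi) capabilities
ENV NVIDIA_DRIVER_CAPABILITIES compute,utility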
If you are interested in customising a Tensorflow image, I wrote another post on that.
Host Configuration for NVIDIA driver in container
As mentioned in the beginning, this is not something you want on a workstation. The process requires you to start the driver container when no other display driver is loaded (that is, via SSH, for example). Furthermore, at the moment of writing only Ubuntu 16.04, Ubuntu 18.04 and CentOS 7 were supported.
There is an official guide and below are extractions from it for Ubuntu 18.04.
Edit the 'root' option in the NVIDIA Container Toolkit settings:
sudo sed -i 's/^#root/root/' /etc/nvidia-container-runtime/config.toml
Disable the Nouveau driver modules:
sudo tee /etc/modules-load.d/ipmi.conf <<< "ipmi_msghandler" \
&& sudo tee /etc/modprobe.d/blacklist-nouveau.conf <<< "blacklist nouveau" \
&& sudo tee -a /etc/modprobe.d/blacklist-nouveau.conf <<< "options nouveau modeset=0"
If you are using an AWS kernel, ensure that the i2c_core kernel module is enabled:
sudo tee /etc/modules-load.d/ipmi.conf <<< "i2c_core"
Update the initramfs:
sudo update-initramfs -u
Now it's time to reboot for the changes to take effect. After the reboot, check that no nouveau or nvidia modules are loaded. The commands below should return nothing:
lsmod | grep nouveau
lsmod | grep nvidia
Starting the driver in a container
The guide offers a command to run the driver; I prefer docker-compose. Save the following as driver.yml:
version: "3.0"
services:
driver:
image: nvidia/driver:450.80.02-ubuntu18.04
privileged: true
restart: unless-stopped
volumes:
- /run/nvidia:/run/nvidia:shared
- /var/log:/var/log
pid: "host"
container_name: nvidia-driver
Use docker-compose -f driver.yml up -d to start the driver container. It will take a couple of minutes to compile the modules for your kernel. You may use docker logs nvidia-driver -f to follow the process; wait for the 'Done, now waiting for signal' line to appear. Alternatively, use lsmod | grep nvidia to see if the driver modules are loaded. When it's ready you should see something like this:
nvidia_modeset 1183744 0
nvidia_uvm 970752 0
nvidia 19722240 17 nvidia_uvm,nvidia_modeset
Docker Compose v1.27.0+ (since 2022, compose file format 3.x):
version: "3.6"
services:
jupyter-8888:
image: "tensorflow/tensorflow:latest-gpu-jupyter"
env_file: "env-file"
deploy:
resources:
reservations:
devices:
- driver: "nvidia"
device_ids: ["0"]
capabilities: [gpu]
ports:
- 8880:8888
volumes:
- workspace:/workspace
- data:/data
If you want to specify different GPU ids, e.g. 0 and 3:
device_ids: ['0', '3']
Managed to get it working by installing WSL2 on my Windows machine to use VS Code along with the Remote - Containers extension. Here is a collection of articles that helped a lot with the installation of WSL2 and using VS Code from within it:
https://learn.microsoft.com/en-us/windows/wsl/install-win10
ubuntu.com/blog/getting-started-with-cuda-on-ubuntu-on-wsl-2
https://code.visualstudio.com/docs/remote/containers
With the Remote - Containers extension of VS Code, you can then set up your devcontainer based on a docker-compose file (or just a Dockerfile, as I did), which is probably better explained in the third link above. One thing for me to remember is that when defining the .devcontainer.json file you need to make sure to set:
// Optional arguments passed to `docker run ...`
"runArgs": [
  "--gpus", "all"
]
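Putting it together, a minimal .devcontainer.json might look like this (the name and Dockerfile path are placeholders for your own):
{
  "name": "gpu-dev",
  "build": { "dockerfile": "Dockerfile" },
  "runArgs": [
    "--gpus", "all"
  ]
}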
Before VS Code I used PyCharm for a long time, so switching to VS Code was quite a pain at first, but VS Code together with WSL2, Remote - Containers and the Pylance extension has made it quite easy to develop in a container with GPU support. As far as I know, PyCharm doesn't support debugging inside a container in WSL at the moment, because of:
https://intellij-support.jetbrains.com/hc/en-us/community/posts/360009752059-Using-docker-compose-interpreter-on-wsl-project-Windows-
https://youtrack.jetbrains.com/issue/WI-53325
I would like to run two Docker images with docker-compose.
One image should run with nvidia-docker and the other with plain docker.
I've seen this post: use nvidia-docker-compose launch a container, but exited soon,
but this is not working for me (not even when running only one image)...
Any idea would be great.
UPDATE: please check nvidia-docker 2 and its support of docker-compose first:
https://github.com/NVIDIA/nvidia-docker/wiki/Frequently-Asked-Questions#do-you-support-docker-compose
(I'd first suggest adding the nvidia-docker tag).
If you look at the nvidia-docker-compose code here, it only generates a specific docker-compose file after querying the NVIDIA configuration on localhost:3476.
You can also write this docker-compose file by hand, as they turn out to be quite simple. Follow this example, replace 375.66 with your NVIDIA driver version, and put in as many /dev/nvidia[n] lines as you have graphics cards (I did not try to put services on separate GPUs, but go for it!):
version: '2'
services:
  exampleservice0:
    image: company/image
    devices:
      - /dev/nvidia0
      - /dev/nvidia1
      - /dev/nvidiactl
      - /dev/nvidia-uvm
      - /dev/nvidia-uvm-tools
    environment:
      - EXAMPLE_ENV_VARIABLE=example
    volumes:
      - ./disk:/disk
      - nvidia_driver_375.66:/usr/local/nvidia:ro
volumes:
  media: null
  nvidia_driver_375.66:
    external: true
Then just run this hand-made docker-compose file with a classic docker-compose command.
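For example:
docker-compose up -d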
Maybe you can then compose with non-NVIDIA containers by skipping the NVIDIA-specific stuff in the other services.
In addition to the accepted answer, here's my approach, a bit shorter.
I needed to use the old docker-compose file format (2.3) because of the required runtime: nvidia (it won't necessarily work with version: 3; see this). Setting NVIDIA_VISIBLE_DEVICES=all will make all the GPUs visible.
version: '2.3'
services:
  your-service-name:
    runtime: nvidia
    environment:
      - NVIDIA_VISIBLE_DEVICES=all
    # ...your stuff
My example is available here.
Tested on NVIDIA Docker 2.5.0, Docker CE 19.03.13 and NVIDIA-SMI 418.152.00 and CUDA 10.1 on Debian 10.
I'm starting a new project with Symfony 3 and I want to use Docker for the development environment. We will work on this project with a dozen developers so I want to have an easy install process.
Here's my docker-compose.yml
version: '2'
services:
  db:
    image: mysql
    ports:
      - "3307:3306"
    environment:
      MYSQL_ROOT_PASSWORD: root
      MYSQL_DATABASE: mydb
      MYSQL_USER: root
      MYSQL_PASSWORD: root
  php:
    build: ./php-fpm
    expose:
      - "9001"
    volumes:
      - .:/var/www/project
      - ./var/logs:/var/www/project/app/logs
    links:
      - db
  nginx:
    build: ./nginx
    ports:
      - "8001:80"
    links:
      - php
    volumes_from:
      - php
    volumes:
      - ./var/logs/nginx/:/var/log/nginx
I installed the recent Docker for Mac application (beta). The big issue is that my Symfony app is very, very slow (a simple page takes more than 5 seconds). The same app with MAMP is much faster (500ms max). Is this a known issue of Docker? How can I debug it?
This is a known issue. Your local file system is being mounted in the Docker for Mac Linux VM with osxfs, and there is some additional latency when reading and writing these mounted files. For small applications this isn't too noticeable, but for larger applications that could read thousands of files on a single request it can slow things down significantly.
Sorry for the late answer, but you could install Docker CE Edge, because it supports cached mode.
Download Docker Edge (while waiting for the stable version of Docker that supports cached mode).
Add the following lines to your docker-compose.yml file:
php:
  volumes:
    - ${SYMFONY_APP_PATH}:/var/www/symfony:cached
Replace ${SYMFONY_APP_PATH} with your own path.
Actually I'm using Docker to run projects locally. To run Docker faster I used the setup below:
macOS:
Docker Toolbox
Install the dmg file as normal.
Open your terminal and type:
$ docker-machine create --driver virtualbox default
$ docker-machine env default
$ eval "$(docker-machine env default)"
Now you have the docker-machine up and running; any docker-compose or docker command will run "inside the machine".
In our case "Symfony" is a large application. The docker-machine file system is under osxfs, so the application will be very slow.
docker-machine-nfs
Install with:
curl -s https://raw.githubusercontent.com/adlogix/docker-machine-nfs/master/docker-machine-nfs.sh |
  sudo tee /usr/local/bin/docker-machine-nfs > /dev/null && \
  sudo chmod +x /usr/local/bin/docker-machine-nfs
Running
It will be necessary to type the root password
$ docker-machine-nfs default
Now your docker-machine is running on the NFS file system.
The speed will be normal.
Mapping your docker-machine to localhost
Normally the Docker container will run at 192.168.99.100:9000.
Run in the terminal:
$ vboxmanage modifyvm default --natpf1 "default-map,tcp,,9000,,9000"
You can then access it from localhost:9000.
It's possible to get performance with Docker for Mac almost as fast as native shared volumes with Linux by using Mutagen. A benchmark is available here.
I created a full example for a Symfony project, it can be used for any type of project in any language.
I had a similar problem. In my case I was running a Python script within a Docker container and it was really slow. The way I solved this was by using the "old" Docker Toolbox.
It's not ideal, but it worked for me.
I have a detailed solution to this problem in my answer here, docker on OSX slow volumes, please check it out.
I got it to where there are no slowdowns and no extra software to install.
Known issue
This is a known issue: https://forums.docker.com/t/file-access-in-mounted-volumes-extremely-slow-cpu-bound/8076.
I wouldn't recommend Docker Toolbox (https://www.docker.com/products/docker-toolbox) if you have Docker for Mac (https://www.docker.com/docker-mac).
Docker for Mac does not use VirtualBox, but rather HyperKit, a
lightweight macOS virtualization solution built on top of
Hypervisor.framework in macOS 10.10 Yosemite and higher.
https://docs.docker.com/docker-for-mac/docker-toolbox/#the-docker-for-mac-environment
My workaround
I have created a workaround which may help you. I use http://docker-sync.io/ for my Symfony project. Before using docker-sync the page took 30 seconds to load; now it's below 1 second: https://github.com/Arkowsky/docker_symfony
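For reference, a minimal docker-sync.yml sketch; the sync name here is a placeholder and native_osx is only one of the available strategies, so check the docker-sync docs:
version: "2"
syncs:
  app-sync:
    src: './'
    sync_strategy: 'native_osx'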
I am trying to volume-mount an NFS share, but I am running into some issues with that. When I run a regular docker command such as:
docker run -i -t --privileged=true -v /mnt/bluearc:/mnt/bluearc -v /net:/net ubuntu bash
I have my desired drive mounted at /mnt/bluearc. However, if I run it with docker-compose:
test_ser:
  container_name: test_ser
  hostname: test_ser
  image: ubuntu
  restart: always
  working_dir: /repo/drop_zone_dub
  volumes_from:
    - nerve_repo_data
  volumes:
    - /mnt/bluearc:/mnt/bluearc
    - /net:/net
  privileged: true
  command: bash
When I try to access the directories I get the following error:
Too many levels of symbolic links
What is compose doing differently that would cause this?
I had the same issue and found a hidden Docker parameter here:
https://github.com/moby/moby/issues/24303
-v /nfs:/nfs:shared
It works for me so far.
I suspect this is related to Docker and Automounting. See https://serverfault.com/questions/640895/why-do-some-host-volumes-in-docker-containers-give-the-error-too-many-levels-of
It seems to just be something Docker can't do.
We usually use:
-v /nfs:/nfs:slave
which we found works better with autofs/the automounter.
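For reference, the same propagation flags can be appended to a bind mount inside the compose file itself, e.g.:
volumes:
  - /mnt/bluearc:/mnt/bluearc:slave # or :shared, as in the answer above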
In this thread, I found a solution
https://github.com/docker/for-win/issues/5763
Reverting to an older version of Docker Desktop via Chocolatey helped me.
choco uninstall docker-desktop
choco install docker-desktop --version=2.1.0.5 --allow-downgrade
This apparently is a problem with the Linux kernel used by Windows (WSL).