I'm starting a new project with Symfony 3 and I want to use Docker for the development environment. We will work on this project with a dozen developers so I want to have an easy install process.
Here's my docker-compose.yml
version: '2'
services:
  db:
    image: mysql
    ports:
      - "3307:3306"
    environment:
      MYSQL_ROOT_PASSWORD: root
      MYSQL_DATABASE: mydb
      MYSQL_USER: root
      MYSQL_PASSWORD: root
  php:
    build: ./php-fpm
    expose:
      - "9001"
    volumes:
      - .:/var/www/project
      - ./var/logs:/var/www/project/app/logs
    links:
      - db
  nginx:
    build: ./nginx
    ports:
      - "8001:80"
    links:
      - php
    volumes_from:
      - php
    volumes:
      - ./var/logs/nginx/:/var/log/nginx
I installed the recent Docker for Mac application (beta). The big issue is that my Symfony app is very, very slow (a simple page takes more than 5 seconds). The same app with MAMP is much faster (500 ms max). Is this a known issue with Docker? How can I debug it?
This is a known issue. Your local file system is being mounted into the Docker for Mac Linux VM with osxfs, and there is some additional latency when reading and writing these mounted files. For small applications this isn't too noticeable, but for larger applications that may read thousands of files on a single request it can slow things down significantly.
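If you want to see the overhead yourself before changing anything, one rough check (a sketch; the second command also includes container start-up time, so treat the numbers as an order of magnitude only) is to time a file-heavy operation on the host and then through an osxfs bind mount:

$ time find . -type f > /dev/null
$ time docker run --rm -v "$PWD":/src alpine find /src -type f > /dev/null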
Sorry for the late answer, but you could install Docker CE Edge, because it supports cached mode.
Download Docker Edge (while waiting for the stable version of Docker to support cached mode)
Add the following to your docker-compose.yml file:
php:
  volumes:
    - ${SYMFONY_APP_PATH}:/var/www/symfony:cached
Replace ${SYMFONY_APP_PATH} with your own path.
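For example, applied to the php service from the question (a sketch; keep whichever container path your setup actually uses):

php:
  build: ./php-fpm
  volumes:
    - .:/var/www/project:cached
    - ./var/logs:/var/www/project/app/logs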
I'm currently using Docker to run projects locally. To make Docker run faster I used the setup below:
MAC OSX:
Docker Toolbox
Install the dmg file normally.
Open your terminal and type:
$ docker-machine create --driver virtualbox default
$ docker-machine env default
$ eval "$(docker-machine env default)"
Now you have the docker-machine up and running; any docker-compose or docker command will run "inside the machine".
In our case Symfony is a large application. By default docker-machine mounts your files into the VM through VirtualBox shared folders, which are slow, so the application will be very slow.
docker-machine-nfs
Install with:
curl -s https://raw.githubusercontent.com/adlogix/docker-machine-nfs/master/docker-machine-nfs.sh |
  sudo tee /usr/local/bin/docker-machine-nfs > /dev/null && \
  sudo chmod +x /usr/local/bin/docker-machine-nfs
Running
It will be necessary to type the root password
$ docker-machine-nfs default
Now your docker-machine is sharing files over NFS.
The speed will be back to normal.
Mapping your docker-machine to localhost
Normally the Docker container will be reachable at 192.168.99.100:9000
Running on terminal:
$ vboxmanage modifyvm default --natpf1 "default-map,tcp,,9000,,9000"
Now you can access it from localhost:9000
It's possible to get performance with Docker for Mac almost as fast as native shared volumes with Linux by using Mutagen. A benchmark is available here.
I created a full example for a Symfony project, it can be used for any type of project in any language.
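For a rough idea of what that setup looks like (the container name and paths below are placeholders, and flags may differ between Mutagen versions, so check the Mutagen documentation), you create a two-way sync session between the host folder and a folder inside the running container instead of bind-mounting the code:

$ brew install mutagen-io/mutagen/mutagen
$ mutagen sync create --name=project-code ./ docker://my_php_container/var/www/project
$ mutagen sync list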
I had a similar problem. In my case I was running a Python script within a Docker container and it was really slow. The way I solved this was by using the "old" Docker Toolbox.
It's not ideal, but it worked for me.
I have a detailed solution to this problem in my answer here: docker on OSX slow volumes. Please check it out.
With it there are no slowdowns and no extra software to install.
Known issue
This is a known issue: https://forums.docker.com/t/file-access-in-mounted-volumes-extremely-slow-cpu-bound/8076.
I wouldn't recommend Docker Toolbox (https://www.docker.com/products/docker-toolbox) if you have Docker for Mac (https://www.docker.com/docker-mac).
Docker for Mac does not use VirtualBox, but rather HyperKit, a
lightweight macOS virtualization solution built on top of
Hypervisor.framework in macOS 10.10 Yosemite and higher.
https://docs.docker.com/docker-for-mac/docker-toolbox/#the-docker-for-mac-environment
My workaround
I have created a workaround which may help you. I use http://docker-sync.io/ for my Symfony project. Before using docker-sync a page was loading in 30 seconds; now it's below 1 second - https://github.com/Arkowsky/docker_symfony
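For reference, a minimal docker-sync.yml sketch for this kind of setup (the sync name and strategy are illustrative; see the docker-sync documentation for the options that fit your project):

version: "2"
syncs:
  symfony-code-sync:
    src: './'
    sync_strategy: 'native_osx'

You then mount symfony-code-sync as an external volume in your compose file instead of the host path and start everything with docker-sync-stack start.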
Related
I am getting this error when running docker-compose up, on one of the containers only.
exec: "com.docker.cli": executable file not found in $PATH
The terminal process "/bin/zsh '-c', 'docker logs -f f6557b5dd19d9b2bc5a63a840464bc2b879d375fe72bc037d82a5358d4913119'" failed to launch (exit code: 1).
I uninstalled and reinstalled Docker Desktop 2.3.0.5 on Mac
ran docker-compose build from scratch
the other containers are running
and I still get the above error.
It used to run fine. I am not sure why this is happening. I know that I upgraded Docker from, I think, 2.3,
and I also think I received an update on my Mac.
Dockerfile
FROM tiangolo/uvicorn-gunicorn:python3.8
COPY requirements.txt /app/
RUN pip install -r requirements.txt
COPY ./app /app/app
#COPY config.py /app/app/
docker-compose.yml
version: "3"
services:
postgresql:
container_name: postgresql
image: postgres:12
ports:
- "5433:5432"
environment:
- POSTGRES_USER=${POSTGRES_USER}
- POSTGRES_PASSWORD=${POSTGRES_PASSWORD}
- POSTGRES_DB=${POSTGRES_DB}
volumes:
- ./postgres-data:/var/lib/postgresql/data
fastapi:
build:
context: ./fastapi/
dockerfile: Dockerfile
volumes:
- ./fastapi/app/imgs:/app/app/imgs
ports:
- "1001:80"
depends_on:
- postgresql
env_file:
- .env
pgadmin:
container_name: pgadmin
image: dpage/pgadmin4
environment:
- PGADMIN_DEFAULT_EMAIL=pgadmin4#pgadmin.org
- PGADMIN_DEFAULT_PASSWORD=admin
ports:
- "5050:80"
depends_on:
- postgresql
solr:
build:
context: ./solr/
dockerfile: Dockerfile
restart: always
ports:
- "8983:8983"
volumes:
- data:/var/solr
volumes:
data:
Update:
It worked when I downgraded to Docker Desktop 2.3.0.4.
Updated Answer:
Since VSCode Docker 1.14.0 you can now set the Docker executable path in the settings, which should help in most cases.
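If you go that route, the relevant entry in settings.json looks roughly like this (the path is an example; in the extension versions I have looked at the setting is called docker.dockerPath, so check your extension's settings UI):

{
  "docker.dockerPath": "/usr/local/bin/docker"
}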
Old Answer (Option was removed from Docker Desktop):
Docker Desktop version 2.4.0.0 is working for me after I deactivated the "Enable cloud experience" feature. You can find it under Preferences --> Command Line.
If you are still experiencing the problem, you may try a clean removal and reinstall of Docker, and also make sure that Docker is actually running; see other possible solution(s) here.
History of GitHub Issues:
https://github.com/docker/for-mac/issues/4956
https://github.com/microsoft/vscode-docker/issues/2366
https://github.com/microsoft/vscode-docker/issues/2578
https://github.com/microsoft/vscode-docker/issues/2894
Status (2021-06-22): VSCode Version 1.57.0 seems to have fixed the issue again.
You might get the following error message simply because you have not started Docker yet:
exec: "com.docker.cli": executable file not found in $PATH
In my case the problem was that I had installed and then crudely removed the Docker Compose CLI. This caused the above error to start popping up.
I got the compose CLI back using instructions from https://docs.docker.com/cloud/ecs-integration/#install-the-docker-compose-cli-on-linux and running (as root):
curl -L https://raw.githubusercontent.com/docker/compose-cli/main/scripts/install/install_linux.sh | sh
This fixed it for me.
Note: I would not recommend installing the docker-compose CLI just to fix this issue; I am only sharing some insights in case this is applicable to you as well.
Ensure that the Docker CLI is installed, not just Docker Desktop, on Linux. You can install it using:
sudo apt install docker.io
Update: The "cloud experience" no longer exists even as an experimental feature in Docker Desktop v3.2.1. This should no longer be an issue.
If you continue to see this problem on a newer version, you will need to downgrade to Docker v3.1.0, disable the cloud experience feature, then upgrade to the newest version.
I had the exact same issue. It was fixed after starting the upgraded Docker first, then running the command again.
dostarr@DOSTARR-M-38LF ~ % docker run busybox
exec: "com.docker.cli": executable file not found in $PATH
<started docker>
dostarr@DOSTARR-M-38LF ~ % docker run busybox
dostarr@DOSTARR-M-38LF ~ %
I had the same problem when trying to run minikube tunnel, and since I didn't want to re-install anything, I ended up running it from the docker bin path (on Windows it's in 'C:\Program Files\Docker\Docker\resources\bin') and it worked.
An alternative to Docker Desktop is colima, a container runtime for macOS (and Linux) with minimal setup.
# Homebrew
brew install colima docker
colima start
Now, you can use the docker commands as before.
For docker compose commands, you have to install:
brew install docker-compose
If you have already installed Docker, it may simply not have started yet. Type the following in a terminal and it should solve the issue:
docker run -d -p 80:80 docker/getting-started
I am trying to install the latest Elasticsearch on my Raspberry Pi 3 by following the installation tutorial, however I found absolutely nothing that works.
Some info about my system:
$ sudo apt-get update
$ sudo apt-get upgrade
$ cat /etc/os-release
PRETTY_NAME="Raspbian GNU/Linux 9 (stretch)"
NAME="Raspbian GNU/Linux"
VERSION_ID="9"
VERSION="9 (stretch)"
ID=raspbian
ID_LIKE=debian
$ java -version
openjdk version "9-Raspbian"
OpenJDK Runtime Environment (build 9-Raspbian+0-9b181-4bpo9rpt1)
OpenJDK Server VM (build 9-Raspbian+0-9b181-4bpo9rpt1, mixed mode)
// I also tried with OpenJDK 8
What I've tried
Install via apt-get
$ sudo apt-get install elasticsearch
....
Preparing to unpack .../elasticsearch_1.7.5-1_all.deb .
....
$ ./usr/share/elasticsearch/bin/elasticsearch
Exception in thread "main" java.lang.NoClassDefFoundError: org/elasticsearch/common/jackson/dataformat/yaml/snakeyaml/error/YAMLException
at org.elasticsearch.common.jackson.dataformat.yaml.YAMLFactory._createParser(YAMLFactory.java:426)
at org.elasticsearch.common.jackson.dataformat.yaml.YAMLFactory.createParser(YAMLFactory.java:327)
at org.elasticsearch.common.xcontent.yaml.YamlXContent.createParser(YamlXContent.java:90)
at org.elasticsearch.common.settings.loader.XContentSettingsLoader.load(XContentSettingsLoader.java:45)
at org.elasticsearch.common.settings.loader.YamlSettingsLoader.load(YamlSettingsLoader.java:46)
at org.elasticsearch.common.settings.ImmutableSettings$Builder.loadFromStream(ImmutableSettings.java:982)
at org.elasticsearch.common.settings.ImmutableSettings$Builder.loadFromUrl(ImmutableSettings.java:969)
at org.elasticsearch.node.internal.InternalSettingsPreparer.prepareSettings(InternalSettingsPreparer.java:110)
at org.elasticsearch.bootstrap.Bootstrap.initialSettings(Bootstrap.java:144)
at org.elasticsearch.bootstrap.Bootstrap.main(Bootstrap.java:215)
at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:32)
Caused by: java.lang.ClassNotFoundException: org.elasticsearch.common.jackson.dataformat.yaml.snakeyaml.error.YAMLException
at java.base/jdk.internal.loader.BuiltinClassLoader.loadClass(BuiltinClassLoader.java:582)
at java.base/jdk.internal.loader.ClassLoaders$AppClassLoader.loadClass(ClassLoaders.java:185)
at java.base/java.lang.ClassLoader.loadClass(ClassLoader.java:496)
... 11 more
Go to the Elasticsearch downloads page and get the tar file
/elasticsearch-7.1.1/bin/elasticsearch
./elasticsearch-7.1.1/bin/elasticsearch-env: line 69: /home/pi/elasticsearch-7.1.1/jdk/bin/java: cannot execute binary file: Exec format error
Via Docker
$ docker --version
Docker version 18.04.0-ce, build 3d479c0
$ docker image ls
REPOSITORY TAG IMAGE ID CREATED SIZE
docker.elastic.co/elasticsearch/elasticsearch 7.1.1 b0e9f9f047e6 4 weeks ago 894MB
$ docker run -p 9200:9200 -p 9300:9300 -e "discovery.type=single-node" docker.elastic.co/elasticsearch/elasticsearch:7.1.1
standard_init_linux.go:190: exec user process caused "exec format error"
Has anyone managed to install Elasticsearch 7 on Raspberry Pi 3? Is there any way to go around the issues listed above?
Unfortunately, unlike all previous releases, the deb package for Elasticsearch 7 is only packaged for Intel architectures. I believe the dependencies are the JVM and the machine learning module, which can be turned off, but it would have to be repackaged or installed by hand from the files in the deb package. (If I don't get round to doing it, I'm sure someone else will eventually.)
Unless you particularly need ES7 features, the easiest thing would be to install the last version 6, which will install on Raspbian. It's here: https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-6.8.0.deb
You will want to change the default memory used from 1G to 512M in /etc/elasticsearch/jvm.options and turn off machine learning in /etc/elasticsearch/elasticsearch.yml (xpack.ml.enabled: false).
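Concretely, that amounts to something like the following (a sketch using the 512M value suggested above):

# /etc/elasticsearch/jvm.options
-Xms512m
-Xmx512m

# /etc/elasticsearch/elasticsearch.yml
xpack.ml.enabled: false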
While it will run with openjre, the default Java run time on Raspbian, it runs about 30 times slower than on an equivalent Intel. I've never got to the bottom of why, but it is fine if you install the Oracle JRE instead:
apt-get install oracle-java8-jdk
Note that the version in the Raspbian/Debian repo (what apt-get install elasticsearch gives you) is version 1, not v7 - it is ancient, avoid it.
In extensive use of ES6 (and its predecessors) on Raspberry Pi, I have not found anything to work differently from Intel, despite their statement that they don't support anything other than Intel.
However, the RPi struggles to run the whole ELK (Elasticsearch, Logstash, Kibana) stack (I did try that): it really doesn't have enough memory. The RPi 4 with 4GB might do better (I haven't tried), or the stack could be distributed across three separate Pis. I did get ELK 5 to run, but it exhausted memory after a few days' use, and I couldn't get ELK 6 to run at all.
On Raspbian 9, after a test installing elasticsearch-7 and then purging it to install elasticsearch-6, in addition to what is said above I had to define JAVA_HOME in /etc/default/elasticsearch:
# Elasticsearch Java path
JAVA_HOME=/usr/lib/jvm/java-8-openjdk-armhf
The owner was not the right one for two folders; to fix it:
sudo chown -R elasticsearch:elasticsearch /etc/elasticsearch
sudo chown -R elasticsearch:elasticsearch /var/lib/elasticsearch
In our case at charik.org, we are running a cluster of RPi 4s with Ubuntu Server 19.10, which is the only OS fully supporting the Arm64v8 architecture on the Raspberry Pi.
The decision to use this OS was not easy, because it consumes more memory than a lightweight Raspbian, but the ease of use fully explains our decision.
We built an Elasticsearch v7.5.1 image for the Arm64v8 architecture from the ES package with no JDK embedded, found here: elasticsearch-no-jdk.
Check out our Docker Hub repo for the built image: charik/elasticsearch:latest
Elasticsearch embeds its own Java binaries in the jdk folder.
You can instead point it at your system's Java with:
JAVA_HOME=/usr ./bin/elasticsearch
You will then be unsupported, but you can use Elasticsearch on ARM...
In order to easily use Elasticsearch, Kibana and Elastalert on a Raspberry Pi, we have made these 3 Docker images available on Docker Hub:
Elasticsearch: https://hub.docker.com/r/comworkio/elasticsearch
Kibana: https://hub.docker.com/r/comworkio/kibana
Elastalert: https://hub.docker.com/r/comworkio/elastalert
Here's the git repository containing the Dockerfiles and documentation: https://gitlab.comwork.io/oss/elasticstack/elasticstack-arm (the Docker images are built on Raspberry Pis used as GitLab runners, then pushed to Docker Hub).
We'll keep them up to date with the right tags until Elastic takes care of it (after discussing this matter with them, I think they will provide ARM-based images some day).
Here's an example of a docker-compose file you can use on a single Raspberry Pi:
version: "3.3"
services:
es01:
image: comworkio/elasticsearch:7.9.1-1.8-arm
container_name: es01
ports:
- 9200:9200
- 9300:9300
networks:
- covid19
volumes:
- data01:/usr/share/elasticsearch/data
kib01:
image: comworkio/kibana:7.9.1-1.9-arm
container_name: kib01
ports:
- 5601:5601
environment:
- ES_PROTO=http
- ES_HOST=es01
- ES_PORT=9200
networks:
- covid19
depends_on:
- es01
volumes:
data01:
driver: local
networks:
covid19:
driver: bridge
Then here you go with:
docker-compose up -d
Then your Kibana is accessible at http://{your_raspberrypi}:5601 and your Elasticsearch API at http://{your_raspberrypi}:9200. This works pretty well with a Raspberry Pi 4 Model B with 8 GB of RAM. If you don't have this model but an older one, you can use two of them, with at least 2 GB for your Elastic node and 2 GB for your Kibana node. I also advise you to use a Model 4 in order to be able to boot from an SSD drive instead of an SD card.
For the French speakers, here's a demo using those images: https://youtu.be/BC1iSnoe15k
And the repository of the project with some documentations (in english): https://gitlab.comwork.io/oss/covid19
This article helps you install Elasticsearch on a Raspberry Pi.
Normally it is difficult to install Elasticsearch on a Raspberry Pi because its bundled JDK does not support the armhf platform. So in this article we run Elasticsearch with the no-jdk bundle of Elasticsearch and point it at the Raspberry Pi's own JDK (JAVA_HOME).
If there is any difficulty, feel free to ask.
Click here for article
I had Docker for Windows, switched to Docker Toolbox and am now back to Docker for Windows, and I ran into issues with volumes.
Before, volumes were working perfectly fine and my containers running nodemon/ts-node/CLI tools watching files were restarting properly on source code changes, but now they don't at all, so it looks like file changes from the host are not propagated into the container.
This is docker-compose for one service:
api:
  build:
    context: ./api
    dockerfile: Dockerfile-dev
  volumes:
    - ./api:/srv
  working_dir: /srv
  links:
    - mongo
  depends_on:
    - mongo
  ports:
    - 3030:3030
  environment:
    MONGODB: mongodb://mongo:27017/api_test
  labels:
    - traefik.enable=true
    - traefik.frontend.rule=Host:api.mydomain.localhost
This is Dockerfile-dev:
FROM node:10-alpine
ENV NODE_ENV development
WORKDIR /srv
EXPOSE 3030
# simply nodemon, works when run from the host
CMD yarn dev
Can anyone help with that?
The C drive is shared and verified with docker run --rm -v c:/Users:/data alpine ls /data, which shows the list of files properly.
I will really appreciate any help.
We experienced the exact same problems in our team while developing nodejs/typescript applications with Docker on top of Windows and it has always been a big pain. To be honest, though, Windows does the right thing by not propagating the change event to the containers (Linux hosts also do not propagate the fsnotify events to containers unless the change is made from within the container). So bottom line: I do not think this issue will be avoidable unless you actually change the files within the container instead of changing them on the docker host. You can achieve this with a code sync tool like docker-sync, see this page for a list of available options: https://github.com/EugenMayer/docker-sync/wiki/Alternatives-to-docker-sync
Because we struggled with such issues for a long time, a colleague and I started an open source project called DevSpace CLI: https://github.com/covexo/devspace
The DevSpace CLI can establish a reliable and super fast 2-way code sync between your local folders and folders within your dev containers (works with any Kubernetes cluster, any volume and even with ephemeral / non-persistent folders) and it is designed to work perfectly with hot reloading tools such as nodemon. Setup minikube or a cluster with a one-click installer on some public cloud, run devspace up inside your project and you will be ready to program within your DevSpace without ever having to worry about local Docker issues and hot reloading problems. Let me know if it works for you or if there is anything you are missing.
I've been stuck on this recently (Feb 2020, Docker Desktop 2.2) and none of the basic solutions really helped.
However, when I tried WSL 2 and ran my docker-compose from inside the Ubuntu shell, it started picking up the file changes instantly. So if someone is observing this - try bringing Docker up from WSL 2.
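A minimal sketch of that workflow (the path is illustrative; the important part is keeping the project on the WSL 2 file system rather than under /mnt/c):

# inside the Ubuntu (WSL 2) shell, with WSL integration enabled in Docker Desktop
cd ~/projects/my-app
docker-compose up --build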
I'm having issues calling what is supposed to have been defined in some Docker Compose services from my "main" (web) service. I have the following docker-compose.yml file:
version: '2'
services:
  db:
    image: postgres
    volumes:
      - postgres-db-volume:/data/postgres
  pdftk:
    image: mnuessler/pdftk
    volumes:
      - /tmp/manager:/work
  ffmpeg:
    image: jrottenberg/ffmpeg
    volumes:
      - /tmp/manager:/files
  web:
    build: .
    command: python manage.py runserver 0.0.0.0:8000
    volumes:
      - .:/code
      - /tmp/manager:/files
    ports:
      - "8000:8000"
    depends_on:
      - db
      - pdftk
      - ffmpeg
volumes:
  postgres-db-volume:
I'm able to use db from web perfectly, but unfortunately not pdftk or ffmpeg (these are just command-line utilities that are undefined when I run web's shell):
manager$ docker-compose run web bash
Starting manager_ffmpeg_1
Starting manager_pdftk_1
root@16e4b755172d:/code# pdftk
bash: pdftk: command not found
root@16e4b755172d:/code# ffmpeg
bash: ffmpeg: command not found
How can I get pdftk and ffmpeg to be defined within the web service? Is depends_on not the appropriate directive? Should I be extending web's Dockerfile to call an entry-point script that installs the content found in the other two services (even though this'd seem counterproductive)?
Tried to remove and rebuild the web service after adding pdftk and ffmpeg, but that didn't solve it.
What can I do?
Thank you!
Looks like a misunderstanding of depends_on. It is used to set a starting order for containers.
For example: Start Database before Webserver etc.
If you want access to tools installed in other containers, you would have to open an SSH connection, for example:
ssh pdftk <your command>
But I would install the necessary tools into the web container image.
Extending the Dockerfile to install both tools should do the trick.
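A sketch of what that could look like, assuming web is built from a Debian-based Python image (the base image and package names below are assumptions, not taken from your project; adjust them to your actual Dockerfile):

FROM python:3
# install the command-line tools the application shells out to
RUN apt-get update \
    && apt-get install -y --no-install-recommends pdftk ffmpeg \
    && rm -rf /var/lib/apt/lists/*
WORKDIR /code
COPY . /code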
To access the "tools" you do not need to install SSH, this is most probably pretty complicated and not wanted. The containers are not "merged into one" when using depends_on.
Depends_on is even less then starting order, its more "ruff container start order. E.g. eventhough app depends on DB, it will happen, that the db container process did not yet get fully started while app has already been started - depends_on right now is, in most cases, rather used to notify when a container needs re-initialization when a other container he depends on e.g. does get recreated.
Other than that, you can start your containers and mount the Docker socket into them. Add this:
services:
  web:
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
Now, in the web container (where the docker CLI needs to be installed), you can do:
docker exec pdftk <thecommand>
That's the usual way to run commands on services.
You can of course use HTTP/API-based implementations; in that case you do not need to expose any ports or mount the socket, and moreover you can access the services using their service name:
ping pdftk or ping ffmpeg
Edit: the method described below does not work for the OP's question. Still leaving it here as educational information.
Besides the options described by @opHASnoNAME ... you could try declaring a container volume for pdftk and ffmpeg and using the binaries directly, like so:
ffmpeg:
  volumes:
    - /usr/local/bin
and mount this on your web container:
web:
  volumes_from:
    - ffmpeg
Please note that this approach has some limitations:
the path /usr/local/bin mounted from ffmpeg should not exist in web, otherwise you might need to mount individual files only.
in web, /usr/local/bin must be in your $PATH.
since this is kind of hotlinking of binaries, it might fail due to different Linux versions, missing shared libraries, etc., so it really only works for standalone binaries.
all containers using volumes and volumes_from have to be deployed on the same host.
But I am still using this here and there, e.g. with the docker or docker-compose binaries.
I am trying to volume mount an nfs share but I am running into some issues with that. When I run a regular docker command such as:
docker run -i -t --privileged=true -v /mnt/bluearc:/mnt/bluearc -v /net:/net ubuntu bash
I have my desired drive mounted at /mnt/bluearc. However, if I run it with docker-compose:
test_ser:
  container_name: test_ser
  hostname: test_ser
  image: ubuntu
  restart: always
  working_dir: /repo/drop_zone_dub
  volumes_from:
    - nerve_repo_data
  volumes:
    - /mnt/bluearc:/mnt/bluearc
    - /net:/net
  privileged: true
  command: bash
When I try to access the directories I get the following error:
Too many levels of symbolic links
What is compose doing differently that would cause this?
I had the same issue and found a hidden Docker parameter here:
https://github.com/moby/moby/issues/24303
-v /nfs:/nfs:shared
It works for me so far.
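If you prefer to keep it in the compose file, the same propagation flag can be expressed with the long volume syntax (a sketch assuming Compose file format 3.2 or later; adjust source and target to your paths):

test_ser:
  volumes:
    - type: bind
      source: /mnt/bluearc
      target: /mnt/bluearc
      bind:
        propagation: shared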
I suspect this is related to Docker and Automounting. See https://serverfault.com/questions/640895/why-do-some-host-volumes-in-docker-containers-give-the-error-too-many-levels-of
It seems to just be something Docker can't do.
We usually use:
-v /nfs:/nfs:slave
which we found works better with autofs/the automounter.
In this thread I found a solution:
https://github.com/docker/for-win/issues/5763
Reverting to an older version of Docker Desktop via Chocolatey helped me.
choco uninstall docker-desktop
choco install docker-desktop --version=2.1.0.5 --allow-downgrade
Apparently this is a problem with the Linux kernel that Docker uses on Windows.