I have a Raspberry Pi and I have installed Docker on it. I have made a Python script that reads the GPIO status on it. So when I run the command below
sudo docker run -it --device /dev/gpiomem app-image
it runs perfectly and shows the GPIO status. Now I have created a docker-compose.yml file, as I want to deploy this app.py to the swarm cluster I have created.
Below is the content of docker-compose.yml
version: "3"
services:
app:
image: app-image
deploy:
mode: global
restart: always
privileged: true
When I start the deployment using the sudo docker stack deploy command, the image is deployed but it gives this error:
No access to /dev/mem. Try running as root
So it says it does not have access to /dev/mem, which is very strange because I am using the device option, so why does the service not have access? It also says to try running as root, but as far as I know the containers already run as root. I also tried giving full permissions to the file by running chmod 777 /dev/gpiomem in the code, but it still shows this error.
My main question is: when it runs fine with the docker run command, why does it show this error when deploying with sudo docker stack deploy? How can I resolve this issue?
Thanks
Adding devices, adding capabilities, and using privileged mode are not supported in swarm mode. Those options in the yml file only take effect when you use docker-compose instead of docker stack deploy. You can track the progress on getting these features added to swarm mode in GitHub issue #24862.
Since all you need to do is access a device, you may have luck adding the file for the device as a volume, but that's a shot in the dark:
volumes:
  - /dev/gpiomem:/dev/gpiomem
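Putting that together, a minimal stack file along those lines might look like the following (a sketch, assuming the same app-image; the bind mount alone may still not be enough if the process needs extra capabilities):
version: "3"
services:
  app:
    image: app-image
    deploy:
      mode: global
    volumes:
      # bind-mount the device node instead of using the unsupported devices/privileged options
      - /dev/gpiomem:/dev/gpiomem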
As stated in the docker-compose documentation for devices:
Note: This option is ignored when deploying a stack in swarm mode with a (version 3) Compose file.
The devices option is ignored in swarm. You can use privileged: true which will give access to all devices.
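For example, when the same file is run with docker-compose rather than docker stack deploy, a rough sketch would be:
services:
  app:
    image: app-image
    privileged: true   # honored when run with docker-compose; gives the container access to all devices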
Is there a better way to avoid folder permission issues when a relative folder is set in a docker compose file on Manjaro?
Take the bitnami/elasticsearch:7.7.0 image as an example:
This image will always throw the error ElasticsearchException[failed to bind service]; nested: AccessDeniedException[/bitnami/elasticsearch/data/nodes];.
I can get around it by:
creating the data directory with sudo, followed by chmod 777
attaching a docker volume
But I am looking for a solution that is a bit easier to manage, similar to the Docker experience on Ubuntu and OSX, where I do not have to first create a directory as root in order for the folder mapping to work.
I have made sure that my user is in the docker group by following the post-install instructions in the Docker docs. I have no permission issues when accessing docker info or the socket.
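For reference, those post-install steps are essentially the following (run once, then log out and back in):
sudo groupadd docker
sudo usermod -aG docker $USER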
docker-compose.yml
version: '3.7'
services:
  elasticsearch:
    image: bitnami/elasticsearch:7.7.0
    container_name: elasticsearch
    ports:
      - 9200:9200
    networks:
      - proxy
    environment:
      - ELASTICSEARCH_HEAP_SIZE=512m
    volumes:
      - ./data/:/bitnami/elasticsearch/data
      - ./config/elasticsearch.yml:/opt/bitnami/elasticsearch/config/elasticsearch.yml
networks:
  proxy:
    external: true
I am hoping for a more seamless experience when using my compose files from git, which work fine on other systems, but I am running into this permission issue on the data folder on Manjaro.
I did check other posts on SO; some fixes are temporary, like disabling SELinux, while others require running Docker with the --privileged flag, but I am trying to do this from compose.
This has nothing to do with the Linux distribution but is a general problem with Docker and bind mounts. A bind mount is when you mount a directory of your host into a container. The problem is that the Docker daemon creates the directory under the user it runs with (root) and the UID/GIDs are mapped literally into the container.
Not that it is advisable to run as root, but depending on your requirements, the official Elasticsearch image (elasticsearch:7.7.0) runs as root and does not have this problem.
Another solution that would work for the bitnami image is to make the ./data directory owned by group root and group writable, since it appears the group of the Elasticsearch process is still root.
A third solution is to change the GID the bitnami image runs with to whatever group the data directory was created with, and make the directory group writable.
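For example, the group-writable approach could look like this, assuming the ./data bind mount from the compose file above (a sketch, not a verified fix for every bitnami image):
# make ./data writable by group root (gid 0), which the Elasticsearch process appears to run with
mkdir -p ./data
sudo chgrp -R 0 ./data
sudo chmod -R g+rwX ./data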
I've been running a dev-setup for a while without issue. I'm using Docker for Windows with Windows Subsystem for Linux 2. It's been working very well. Today when trying to spin up docker-compose, it failed with the following error:
frederik@desktop:~/projects/caselab$ docker-compose -f docker-test.yml up
Recreating f26a365c8a83_f26a365c8a83_f26a365c8a83_f26a365c8a83_f26a365c8a83_f26a365c8a83_f26a365c8a83_f26a365c8a83_f26a365c8a83_f26a365c8a83_f26a365c8a83_f26a365c8a83_f26a365c8a83_f26a365c8a83_f26a365c8a83_f26a365c8a83_caselab_db_1 ...
Recreating f26a365c8a83_f26a365c8a83_f26a365c8a83_f26a365c8a83_f26a365c8a83_f26a365c8a83_f26a365c8a83_f26a365c8a83_f26a365c8a83_f26a365c8a83_f26a365c8a83_f26a365c8a83_f26a365c8a83_f26a365c8a83_f26a365c8a83_f26a365c8a83_caselab_db_1 ... error
ERROR: for f26a365c8a83_f26a365c8a83_f26a365c8a83_f26a365c8a83_f26a365c8a83_f26a365c8a83_f26a365c8a83_f26a365c8a83_f26a365c8a83_f26a365c8a83_f26a365c8a83_f26a365c8a83_f26a365c8a83_f26a365c8a83_f26a365c8a83_f26a365c8a83_caselab_db_1 Cannot create container for service db: mkdir 07ff2055c618dedc240ca3275de3f8c41d091136dc659cf463ee9fc62eed1853: permission denied
ERROR: for db Cannot create container for service db: mkdir 07ff2055c618dedc240ca3275de3f8c41d091136dc659cf463ee9fc62eed1853: permission denied
ERROR: Encountered errors while bringing up the project.
frederik@desktop:~/projects/caselab$
I shaved the contents of docker-test.yml down to simply:
version: '3'
services:
  db:
    image: postgres
    logging:
      driver: none
I tried running docker run postgres, which worked without issue. I then tried copying all the contents of my folder to another folder. Now, running docker-compose -f docker-test.yml up from the new folder works without issues.
I think it's somehow related to permissions, though I can see no difference in permissions between the original folder and the new one.
As I do most of my editing in Visual Studio Code running on Windows, I'm thinking it may be related to the Windows / Linux boundary, though I'm not completely sure how. And, again, this setup has been running for months without issue, so I'm at a loss for what I could have changed.
Any ideas?
I managed to solve it.
I noticed that running docker-compose up prepended a hash to the container name every single time the command was run. This resulted in the comically long container name seen above.
Running docker-compose images showed this stale container was still around.
Simply running docker-compose rm removed it, which allowed the right container to be created and run.
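In short, the sequence that fixed it was roughly this (using the same docker-test.yml as above):
docker-compose -f docker-test.yml images   # the stale, hash-prefixed container shows up here
docker-compose -f docker-test.yml rm       # remove the stale container
docker-compose -f docker-test.yml up       # the service is recreated cleanly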
I have filed this as a bug in docker-compose.
I had Docker for Windows, switched to Docker Toolbox, and am now back on Docker for Windows, and I ran into issues with volumes.
Before, volumes were working perfectly fine and my containers running nodemon/ts-node/CLI file watchers restarted properly on source code changes, but now they don't at all, so it looks like file changes from the host are not propagated into the container.
This is docker-compose for one service:
api:
  build:
    context: ./api
    dockerfile: Dockerfile-dev
  volumes:
    - ./api:/srv
  working_dir: /srv
  links:
    - mongo
  depends_on:
    - mongo
  ports:
    - 3030:3030
  environment:
    MONGODB: mongodb://mongo:27017/api_test
  labels:
    - traefik.enable=true
    - traefik.frontend.rule=Host:api.mydomain.localhost
This is Dockerfile-dev:
FROM node:10-alpine
ENV NODE_ENV development
WORKDIR /srv
EXPOSE 3030
# simply runs nodemon; works when run from the host
CMD yarn dev
Can anyone help with that?
The C drive is shared, and I verified it with docker run --rm -v c:/Users:/data alpine ls /data, which lists the files properly.
I will really appreciate any help.
We experienced the exact same problems in our team while developing nodejs/typescript applications with Docker on top of Windows, and it has always been a big pain. To be honest, though, Windows does the right thing by not propagating the change event to the containers (Linux hosts also do not propagate the fsnotify events to containers unless the change is made from within the container).
So, bottom line: I do not think this issue is avoidable unless you actually change the files within the container instead of changing them on the docker host. You can achieve this with a code sync tool like docker-sync; see this page for a list of available options: https://github.com/EugenMayer/docker-sync/wiki/Alternatives-to-docker-sync
Because we struggled with such issues for a long time, a colleague and I started an open source project called DevSpace CLI: https://github.com/covexo/devspace
The DevSpace CLI can establish a reliable and super fast two-way code sync between your local folders and folders within your dev containers (it works with any Kubernetes cluster, any volume, and even with ephemeral / non-persistent folders), and it is designed to work well with hot reloading tools such as nodemon.
Set up minikube or a cluster with a one-click installer on some public cloud, run devspace up inside your project, and you will be ready to program within your DevSpace without ever having to worry about local Docker issues and hot reloading problems. Let me know if it works for you or if there is anything you are missing.
I got stuck on this recently (Feb 2020, Docker Desktop 2.2) and none of the usual solutions really helped.
However, when I tried WSL 2 and ran my docker-compose from inside the Ubuntu shell, it started picking up file changes instantly. So if someone is observing this, try bringing Docker up from WSL 2.
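In practice that means opening the Ubuntu (WSL 2) shell and starting compose from there, for example (the project path is hypothetical):
cd ~/projects/my-project
docker-compose up --build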
Context
I want to run a Docker Compose application on Windows 8. I made it under Ubuntu 16.04 and it works perfectly there.
This Docker Compose setup runs:
nginx
php-fpm
The two containers use volumes.
Files
My .env file:
COMPOSE_CONVERT_WINDOWS_PATHS=1
APPLICATION_PATH=//C/Users/my_user/Documents/Development/my_application
My docker-compose.yml file:
version: '2'
services:
  web:
    build: ../application-web/
    ports:
      - "80:80"
    tty: true
    # Add a volume to link php code on the host and inside the container
    volumes:
      - ${APPLICATION_PATH}:/usr/share/nginx/html/application
      - ${APPLICATION_PATH}/docker_files/docker-assistant:/usr/share/nginx/html/assistant
    # Add hostnames to allow devs to call special url to open sites
    extra_hosts:
      - "localhost:127.0.0.1"
      - "assistant.docker:127.0.0.1"
      - "application.dev:127.0.0.1"
    depends_on:
      - custom-php
    links:
      - custom-php:custom-php
  custom-php:
    build: ../application-php/
    ports:
      - "50:50"
    volumes:
      - ${APPLICATION_PATH}:/usr/share/nginx/html/application
      - ${APPLICATION_PATH}/docker_files/docker-assistant:/usr/share/nginx/html/assistant
Problem
When I run docker-compose up, everything goes well. Containers start.
But when I try to reach http://192.168.99.100 in my web browser, I got a 403 error.
My investigations show that there are no mounted volumes in the nginx and php containers:
docker exec -it compose_web_1 bash
ls -la /usr/share/nginx/html/assistant/
shows
drwxr-xr-x 2 root root 80 May 18 15:30 .
drwxr-xr-x 2 root root 4096 May 18 16:10 ..
It seems that Docker cannot mount volumes. Why?
Other information
I am using the Docker Toolbox: https://www.docker.com/products/docker-toolbox
I know it is the right IP address because when I try to reach it in my web browser, I see my nginx container displaying logs.
Setting the environment variable APPLICATION_PATH to //C:/Users/my_user/Documents/Development/my_application cannot work, because Docker uses the ":" character as the separator in volume declarations:
ERROR: Volume //C:/Users/my_user/Documents/Development/my_application://C:/Users/my_user/Documents/Development/my_application has incorrect format, should be external:internal[:mode]
It's not an nginx problem, because when I create an index.phtml file in the folder, I am able to run it:
<?php
echo 'Hello world!';
Ok, I finally did it!
TL;DR
Follow those instructions to be able to access C:\ inside your containers.
1. Install the Docker Toolbox
Go get it here: https://www.docker.com/products/docker-toolbox
Install it.
2. Run a Hello world
Open a Docker Quickstart Terminal.
Run in it:
docker run hello-world
3. Share C:\ with Docker
Open Virtualbox
Open the configuration of the default virtual machine and go to Shared Folders
Modify or create a new shared folder by clicking the buttons on the right. Set the options to:
C:\ (folder path)
C (folder name)
Auto mount (checked)
Permanent configuration (checked)
Then validate.
4. Activate sharing
Shut down the default virtual machine, then restart it.
5. Set your paths
e.g. if you have a .env file:
COMPOSE_CONVERT_WINDOWS_PATHS=1
APPLICATION_PATH=//C/path_from_C_to_the_folder_you_want_to_share_on_the_volume
/!\ you need to set COMPOSE_CONVERT_WINDOWS_PATHS to 1!
6. Start your Compose
In the Docker Quickstart Terminal:
Go to your Docker Compose folder, then start it:
cd /path_to_your_compose_folder
docker-compose up
Why do I have to do all that? It's so complicated!
Docker relies on Linux namespaces. Without Linux, it can't work. To allow the use of Docker on Windows, Docker needs to install a Linux virtual machine, and all the containers run inside it.
The default virtual machine is created and runs within VirtualBox; that's why you have to share your folders using VirtualBox.
After sharing, the default virtual machine will have a mounted folder in it with a custom name (in the above example, it's C but it could be elephant or whatever).
Finally, Docker will mount volumes from the default virtual machine to the container: you have to use the name of the default machine shared folder in your volume declaration (in the above example, it's C but it could be elephant or whatever).
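For example, if the shared folder had been named elephant instead of C, the .env from step 5 would look like this (the path is illustrative):
COMPOSE_CONVERT_WINDOWS_PATHS=1
APPLICATION_PATH=//elephant/path_from_the_share_root_to_your_project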
I'm currently trying to use variable substitution in a docker-compose.yml file. This file contains the following:
jenkins:
  image: "jenkins:${JENKINS_VERSION}"
  external_links:
    - mongodb:mongo
  ports:
    - 8000:8080
When I try to start everything up, docker-compose shows a warning saying that the variable is not set. I suspect this is caused by the use of sudo to start docker-compose. My setup (a Jenkins docker container which has access to docker and docker-compose via volume mounts) currently requires the use of sudo. Would it be better to stop docker requiring sudo, or is there another way to fix this without changing the current setup?
sudo -E preserves the user's environment when running the command. It should do what you want.
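A quick sketch (the tag value is just an example):
export JENKINS_VERSION=lts
sudo -E docker-compose up -d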