I've been running a dev setup for a while without issue, using Docker for Windows with Windows Subsystem for Linux 2. It's been working very well. Today, when trying to spin up docker-compose, it failed with the following error:
frederik@desktop:~/projects/caselab$ docker-compose -f docker-test.yml up
Recreating f26a365c8a83_f26a365c8a83_f26a365c8a83_f26a365c8a83_f26a365c8a83_f26a365c8a83_f26a365c8a83_f26a365c8a83_f26a365c8a83_f26a365c8a83_f26a365c8a83_f26a365c8a83_f26a365c8a83_f26a365c8a83_f26a365c8a83_f26a365c8a83_caselab_db_1 ...
Recreating f26a365c8a83_f26a365c8a83_f26a365c8a83_f26a365c8a83_f26a365c8a83_f26a365c8a83_f26a365c8a83_f26a365c8a83_f26a365c8a83_f26a365c8a83_f26a365c8a83_f26a365c8a83_f26a365c8a83_f26a365c8a83_f26a365c8a83_f26a365c8a83_caselab_db_1 ... error
ERROR: for f26a365c8a83_f26a365c8a83_f26a365c8a83_f26a365c8a83_f26a365c8a83_f26a365c8a83_f26a365c8a83_f26a365c8a83_f26a365c8a83_f26a365c8a83_f26a365c8a83_f26a365c8a83_f26a365c8a83_f26a365c8a83_f26a365c8a83_f26a365c8a83_caselab_db_1 Cannot create container for service db: mkdir 07ff2055c618dedc240ca3275de3f8c41d091136dc659cf463ee9fc62eed1853: permission denied
ERROR: for db Cannot create container for service db: mkdir 07ff2055c618dedc240ca3275de3f8c41d091136dc659cf463ee9fc62eed1853: permission denied
ERROR: Encountered errors while bringing up the project.
frederik@desktop:~/projects/caselab$
I shaved the contents of docker-test.yml down to simply:
version: '3'
services:
  db:
    image: postgres
    logging:
      driver: none
I tried running docker run postgres, which worked without issue. I then tried copying the entire contents of my folder to another folder. From there, running docker-compose -f docker-test.yml up works without issues.
I think it's somehow related to permissions, though I can see no difference in permissions between the original folder and the new one.
As I do most of my editing in Visual Studio Code running on Windows, I'm thinking it may be related to the Windows/Linux boundary, though I'm not completely sure how. And, again, this setup has been running for months without issue, so I'm at a loss as to what I could have changed.
Any ideas?
I managed to solve it.
I noticed that running docker-compose up prepended another hash to the container name every single time the command was run. This resulted in the comically long container name shown above.
Running docker-compose images showed a container with this name still being present.
Simply running docker-compose rm removed the stale container, which allowed the right one to be created and run.
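For reference, a rough sketch of that cleanup sequence (the -f flag on rm just skips the confirmation prompt; rm only removes stopped service containers):
docker-compose -f docker-test.yml ps      # list the project's containers, including the stale one
docker-compose -f docker-test.yml rm -f   # remove the stale, stopped container
docker-compose -f docker-test.yml up      # recreate it cleanly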
I have filed this as a bug in docker-compose.
Related
As the title says, I was renaming one of my Docker images (built using a Dockerfile and docker-compose.yml) with the command docker tag old-image-name new-image-name. Afterwards I used docker images to check on my current images and I had BOTH the old and the new one.
I removed the old one using docker image rm IMAGE_ID, and since then, whenever I try to start the container, I've been getting the following error: failed to solve: rpc error: code = Unknown desc = failed to solve with frontend dockerfile.v0: failed to read dockerfile: open /var/lib/docker/tmp/buildkit-mount511639725/Dockerfile: no such file or directory
I've tried everything; other containers start without problems and I've successfully run this container in the past. These are the only changes I've made; nothing changed in my Dockerfile or docker-compose.yml.
I've tried removing the images related to this stack and building them again many times. I also tried rebuilding the image with the Dockerfile alone (not through docker-compose.yml).
This error usually means it cannot find your Dockerfile. It usually happens when it's named incorrectly; make sure your Dockerfile is named exactly "Dockerfile".
I solved the problem, and it pains me that it was a very dumb mistake on my side.
While I had the container running, I was making some network changes in my docker-compose.yml to use NGINX with the Django project.
As part of those changes I changed build: . to build: ~/path/to/folder. I thought it would be able to recognize that path, but apparently it has to be absolute or use the . notation.
Solution: revert to build: . or use build: /home/your_user/path/to/folder (assuming your folder is in /home/your_user/*); avoid using ~.
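For example, a minimal sketch of the two forms that do work (the service name web and the paths are illustrative; Compose does not expand ~):
services:
  web:
    build: .                                 # relative to the directory containing the compose file
    # build: /home/your_user/path/to/folder  # or an absolute path; never ~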
Thanks for the replies everyone
I am bringing up my project dependencies using docker-compose. So far this has worked:
docker-compose up -d --no-recreate;
However, today I tried running the project again after a couple of weeks and I was greeted with this error message:
Creating my-postgres ... error
ERROR: for my-postgres Cannot create container for service postgres: b'Conflict. The container name "/my-postgres" is already in use by container "dbd06bb1d99eda6f075ea688df16e8b355e559e1759f084dee8f3cddfc535b0b". You have to remove (or rename) that container to be able to reuse that name.'
ERROR: for postgres Cannot create container for service postgres: b'Conflict. The container name "/my-postgres" is already in use by container "dbd06bb1d99eda6f075ea688df16e8b355e559e1759f084dee8f3cddfc535b0b". You have to remove (or rename) that container to be able to reuse that name.'
ERROR: Encountered errors while bringing up the project.
My docker-compose.yml file is
postgres:
  container_name: my-postgres
  image: postgres:latest
  ports:
    - "15432:5432"
Docker version is
Docker version 19.03.1, build 74b1e89
Docker compose version is
docker-compose version 1.24.1, build 4667896b
Intended behavior of this call is to:
make the container if it does not exist
start the container if it exists
just chill and do nothing if the container is already started
Docker Compose normally assigns a container name based on its current project name and the name of the entry under the services: block. Specifying container_name: explicitly overrides this, but it means you can't launch multiple copies of the same Compose file with different project names (from different directories), because the explicitly chosen container name won't be unique.
You almost never care what the container name is explicitly. It only really matters if you’re trying to use plain docker commands to manipulate Compose-managed containers; it has no impact on inter-service communication. Just delete the container_name: line.
(For similar reasons you can almost always delete hostname: and links: sections if you have them with no practical impact on your overall system.)
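For instance, a minimal sketch of the same service with the container_name: line removed (the default name shown in the comment is an assumption; Compose derives it from the project name, which defaults to the directory name):
postgres:
  image: postgres:latest
  ports:
    - "15432:5432"
  # with no container_name:, Compose names the container something like <project>_postgres_1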
In my case I had moved the project to another directory.
When I tried to run docker-compose up it failed because of some conflicts.
I resolved them with the command docker system prune.
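For context, a short sketch of what that prune touches (the --volumes variant is only a possible addition, and it is destructive):
docker system prune             # removes stopped containers, unused networks, dangling images and dangling build cache
docker system prune --volumes   # additionally removes unused local volumes (data loss if anything important lives there)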
It's caused by being in a different directory than when you last ran docker-compose up. One option is to change back to the original directory. Or, if you've configured it as a systemd service, you can use systemctl.
Well...the error message seems pretty straightforward to me...
The container name "/my-postgres" is already in use by container
If you just want to restart where you left off, you should use docker-compose start.
Otherwise, just clean up your workspace before running it:
docker-compose down
docker-compose up -d
Remove the --no-recreate flag from your docker-compose command and execute the command again.
$ docker-compose up -d
--no-recreate is used to prevent accidental updates.
If there are existing containers for a service, and the service’s configuration or image was changed after the container’s creation, docker-compose up picks up the changes by stopping and recreating the containers. To prevent Compose from picking up changes, use the --no-recreate flag.
From the official Docker docs (link).
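As a quick sketch of the difference (the service names are whatever your compose file already defines):
docker-compose up -d                  # recreates containers whose image or configuration has changed
docker-compose up -d --no-recreate    # never recreates existing containers, even if they have changed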
I had a similar issue.
docker-compose down --remove-orphans
That worked for me.
I have a Raspberry Pi and I have installed Docker on it. I have made a Python script to read the GPIO status. So when I run the command below
sudo docker run -it --device /dev/gpiomem app-image
It runs perfectly and shows the GPIO status. Now I have created a docker-compose.yml file, as I want to deploy this app.py to the swarm cluster I have created.
Below is the content of docker-compose.yml
version: "3"
services:
app:
image: app-image
deploy:
mode: global
restart: always
privileged: true
When I start the deployment using the sudo docker stack deploy command, the image is deployed but it gives this error:
No access to /dev/mem. Try running as root
So it says that it does not have access to /dev/mem, but this is very strange since I am using --device; why does the service not have access? It also says to try running as root, but I think all containers already run as root. I also tried giving full permissions to the file by including the command chmod 777 /dev/gpiomem in the code, but it still shows this error.
My main question is: when it runs normally using the docker run command, why does it show this error when deploying with sudo docker stack deploy? How can I resolve this issue?
Thanks
Adding devices, capabilities, and using privileged mode are not supported in swarm mode. Those options in the yml file exist for using docker-compose instead of docker stack deploy. You can track the progress on getting these features added to swarm mode in github issue #24862.
Since all you need to do is access a device, you may have luck adding the file for the device as a volume, but that's a shot in the dark:
volumes:
  - /dev/gpiomem:/dev/gpiomem
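In context, a sketch of how that bind mount might sit in the stack file from the question (untested in swarm mode, per the caveat above):
version: "3"
services:
  app:
    image: app-image
    deploy:
      mode: global
    volumes:
      - /dev/gpiomem:/dev/gpiomem   # bind-mount the device node into the container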
As stated in the docker-compose devices documentation:
Note: This option is ignored when deploying a stack in swarm mode with a (version 3) Compose file.
The devices option is ignored in swarm mode. You can use privileged: true, which will give access to all devices.
I'm trying to deploy an app that's built with docker-compose, but it feels like I'm going in completely the wrong direction.
I have everything working locally—docker-compose up brings up my app with the appropriate networks and hosts in place.
I want to be able to run the same configuration of containers and networks on a production machine, just using a different .env file.
My current workflow looks something like this:
docker save [web image] [db image] > containers.tar
zip deploy.zip containers.tar docker-compose.yml
rsync deploy.zip user@server:
ssh user@server
unzip deploy.zip -d ./
docker load -i containers.tar
docker-compose up
At this point, I was hoping to be able to just run docker-compose up again once everything is on the server, but that tries to rebuild the containers as per the docker-compose.yml file.
I'm getting the distinct feeling that I'm missing something. Should I be shipping over my full application then building the images at the server instead? How would you start composed containers if you were storing/loading the images from a registry?
The problem was that I was using the same docker-compose.yml file in development and production.
The app service didn't specify a repository name or tag, so when I ran docker-compose up on the server, it just tried to build the Dockerfile in my app's source code directory (which doesn't exist on the server).
I ended up solving the problem by adding an explicit image field to my local docker-compose.yml.
version: '2'
services:
  web:
    image: 'my-private-docker-registry:latest'
    build: ./app
Then created an alternative compose file for production:
version: '2'
services:
  web:
    image: 'my-private-docker-registry:latest'
    # no build field!
After running docker-compose build locally, the web service image is built with the repository name my-private-docker-registry and the tag latest.
Then it's just a case of pushing the image up to the repository.
docker push 'my-private-docker-registry:latest'
Then, after running docker pull on the server, it's safe to stop and recreate the running containers with the new images.
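Put together, a rough sketch of the round trip (docker-compose.prod.yml is just an assumed name for the production compose file described above):
# locally: build and push the tagged image
docker-compose build web
docker push 'my-private-docker-registry:latest'
# on the server: pull the image and recreate the containers
docker-compose -f docker-compose.prod.yml pull
docker-compose -f docker-compose.prod.yml up -d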
I have a docker-compose-staging.yml file which I am using to define a PHP application. I have defined a data volume container (app) in which my application code lives, and is shared with other containers using volumes_from.
docker-compose-staging.yml:
version: '2'
services:
  nginx:
    build:
      context: ./
      dockerfile: docker/staging/nginx/Dockerfile
    ports:
      - 80:80
    links:
      - php
    volumes_from:
      - app
  php:
    build:
      context: ./
      dockerfile: docker/staging/php/Dockerfile
    expose:
      - 9000
    volumes_from:
      - app
  app:
    build:
      context: ./
      dockerfile: docker/staging/app/Dockerfile
    volumes:
      - /var/www/html
    entrypoint: /bin/bash
This particular docker-compose-staging.yml is used to deploy the application to a cloud provider (DigitalOcean), and the Dockerfile for the app container has COPY commands which copy over folders from the local directory to the volume defined in the config.
docker/staging/app/Dockerfile:
FROM php:7.1-fpm
COPY ./public /var/www/html/public
COPY ./code /var/www/html/code
This works when I first build and deploy the application. The code in my public and code directories is present and correct on the remote server. I deploy using the following command:
docker-compose -f docker-compose-staging.yml up -d
However, next I try adding a file to my local public directory, then run the following command to rebuild the updated code:
docker-compose -f docker-compose-staging.yml build app
The output from this rebuild suggests that the COPY commands were successful:
Building app
Step 1 : FROM php:7.1-fpm
---> 6ed35665f88f
Step 2 : COPY ./public /var/www/html/public
---> 4df40d48e6a5
Removing intermediate container 7c0fbbb7f8b6
Step 3 : COPY ./code /var/www/html/code
---> 643d8745a479
Removing intermediate container cfb4f1a4f208
Successfully built 643d8745a479
I then deploy using:
docker-compose -f docker-compose-staging.yml up -d
With the following output:
Recreating docker_app_1
Recreating docker_php_1
Recreating docker_nginx_1
However when I log into the remote containers, the file changes are not present.
I'm relatively new to Docker so I'm not sure if I've misunderstood any part of this process! Any guidance would be appreciated.
This is because of the build cache.
Run:
docker-compose build --no-cache
This will rebuild the images without using any cache.
And then:
docker-compose -f docker-compose-staging.yml up -d
I was struggling with the fact that migrations were neither detected nor applied. I found this thread and noticed that the root cause was, indeed, files not being updated in the container. The force-recreate solution suggested in this thread solved the problem for me, but I find it cumbersome to have to remember when to do it and when not. E.g. Vue-related files seem to work just fine, but Django-related files don't.
So I figured, why not try adjusting the Dockerfile to clean up the previous files before the copy:
RUN rm -rf path/to/your/app
COPY . path/to/your/app
Worked like a charm. Now it's part of the build and all you need to do is run docker-compose up -d --build again. Files are up to date and you can run makemigrations and migrate against your containers.
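For instance, a sketch of that flow (the service name web is an assumption; substitute whatever your compose file calls the Django service):
docker-compose up -d --build
docker-compose exec web python manage.py makemigrations
docker-compose exec web python manage.py migrate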
I had a similar, if not the same, issue while working on a .NET Core application.
What I was trying to do was rebuild my application and get it to update my Docker image so that I could see my changes reflected in the containerized copy.
So I got going by removing the underlying image generated by docker-compose up, using this command to get my changes reflected:
docker rmi [imageId]
I believe there should be support for this in docker-compose, but this was enough for my needs at the moment.
Just leaving this here for when I come back to this page in two weeks.
You may not want to use docker system prune -f in this block.
docker-compose down --rmi all -v \
&& docker-compose build --no-cache \
&& docker-compose -f docker-compose-staging.yml up -d --force-recreate
I had the same issue because of shared volumes. For me the solution was to remove the shared volume using this command:
docker volume rm [VOLUME_ID]
You can find the volume ID or name in the "Mounts" section of the output of this command:
docker inspect [CONTAINER_ID]
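As a shortcut, a sketch that prints just that section (the Go template is standard docker inspect syntax; the container has to be removed before its volume can be deleted):
docker inspect --format '{{ json .Mounts }}' [CONTAINER_ID]   # shows the volume names for the container
docker volume rm [VOLUME_ID]                                  # works once no container uses the volume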
None of the above solutions worked for me, but what did finally work was the following steps:
Copy/move the file outside of the Docker app folder
Delete the file you want to update
Rebuild the Docker image without the updated file
Move the copied file back into the Docker app folder
Rebuild the Docker image again
Now the image will contain the updates to the file.
I'm relatively new to Docker myself and found this thread after experiencing a similar issue with an updated YAML file not seeming to be copied into a rebuilt container, despite having turned off caching.
My build process differs slightly as I use Docker Hub's GitHub integration for automating image builds when new commits to the master branch are made. The build happens on Docker's servers rather than the locally built and pushed container image workflow.
What ended up working for me was to do a docker-compose pull to bring the most up-to-date versions of the containers defined in my .env file down into my local environment. I'm not sure whether the pull command differs from the up command with a --force-recreate flag set, but I figured I'd share anyway in case it might help someone.
I'd also note that this process allowed me to turn auto-caching back on because the edited file was actually being detected by the Docker build process. I just wasn't seeing it because I was still running docker-compose up on outdated image versions locally.
I am not sure it is caching, because (a) whether the cache was used or not is usually noted in the build output, and (b) build should detect the changed content in your directory and invalidate the cache.
I would try to bring up the container on the same machine used to build it, to see whether it is updated or not. If it is, the changed image is not being propagated. I do not see any tag used in your files (build -t XXXX:0.1 or build -t XXXX:latest), so it might be that your staging machine uses a stale image. Or are you pushing the new image so that the staging server will pull it from somewhere?
You are trying to update an existing volume with the contents of a new image; that does not work.
https://docs.docker.com/engine/tutorials/dockervolumes/#/data-volumes
States:
Changes to a data volume will not be included when you update an image.
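One way to act on that with the staging setup above, as a sketch (down -v also removes the anonymous volume behind /var/www/html, so the next up repopulates it from the freshly built image):
docker-compose -f docker-compose-staging.yml down -v
docker-compose -f docker-compose-staging.yml build app
docker-compose -f docker-compose-staging.yml up -d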