I'm developing locally on a Windows 10 PC, and have Docker images installed on drive D.
Running the command 'docker images' shows...
When I run a 'docker-compose up' command I'm getting the following error...
Pulling eis-config (eis/eis-config:)...
ERROR: The image for the service you're trying to recreate has been removed. If you continue, volume data could be lost. Consider backing up your data before continuing.
Continue with the new image? [yN]
Any idea why this is happening? (Could it be that docker-compose is looking for the images on Docker Hub, rather than locally?)
The 'docker-compose.yml' file is shown below...
version: "3.7"
services:
  eis-config:
    image: eis/eis-config
    ports:
      - "8001:8001"
  eis-eureka:
    image: eis/eis-eureka
    ports:
      - "8761:8761"
    depends_on:
      - eis-config
  eis-zuul:
    image: eis/eis-zuul
    ports:
      - "8080:8080"
    depends_on:
      - eis-eureka
  gd-service:
    image: eis/gd-service
    ports:
      - "8015:8015"
    depends_on:
      - eis-eureka
Running
docker-compose kill
docker-compose down
docker-compose up
should fix your issue; most likely you have an old container (running or not) that is causing the problem.
You are running eis/eis-config without an image tag, so latest is implicitly assumed by Docker. You don't have an image eis/eis-config with the latest tag, so either build your image with the latest tag, or reference the tag explicitly: image: eis/eis-config:0.0.1-SNAPSHOT
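For instance, the eis-config service could pin the tag that actually exists locally (a sketch of that one service only; the 0.0.1-SNAPSHOT tag is the one mentioned in this answer and may differ from your build):

```yaml
services:
  eis-config:
    # pin the exact tag that exists locally instead of relying on :latest
    image: eis/eis-config:0.0.1-SNAPSHOT
    ports:
      - "8001:8001"
```

With an explicit tag, docker-compose will use the local image and never try to pull a non-existent :latest from a registry.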
It seems like you missed an entry for the eis/eis-config service in the yml file; check your yml file and regenerate the image for that service.
Where are you trying to run those images, locally in your machine or on a remote server?
Have a look at this related question: Error running containers with docker-compose, error pulling images from my private repo in dockerhub
I have a compose.yml like this one:
version: '3.7'
services:
  nginx:
    restart: unless-stopped
    image: ghcr.io/user/frontend:latest
    ports:
      - 80:80
    depends_on:
      - backend
  backend:
    restart: unless-stopped
    image: ghcr.io/user/backend:latest
    entrypoint: /home/app/web/wsgi-entrypoint.sh
    expose:
      - 8000
We have 2 images stored on GitHub: frontend and backend.
My goal is the following: when an image has been updated on the GitHub Docker Registry, I'd like to automatically update the image on the server and launch the new one, replacing the old one, via docker-compose.
For example: I have a running compose made of frontend and backend, but I just pushed a new image: ghcr.io/user/frontend:latest.
Now, I want a single command which updates only the images that have changed (in this case ghcr.io/user/frontend:latest), so that when I reload the frontend webpage I see the changes.
My attempt is the following:
docker-compose up -d --build
But the system says:
compose-backend_1 is up-to-date
compose-nginx_1 is up-to-date
which is not true!
So, the working procedure I use is a bit manual:
docker pull ghcr.io/user/frontend:latest
I see in the console: Status: Downloaded newer image,
which is the proof that a new image has been downloaded.
Then, if I relaunch the same command the console displays: Status: Image is up to date for ghcr.io/user/frontend:latest
Finally:
docker-compose up -d --build
says: Recreating compose-nginx_1 ... done
I suppose the command docker-compose up -d --build ALONE is not looking for new images and so does not update the image that has changed.
So, is there a SINGLE specific command to fix this?
This should be achieved by running docker-compose pull, and then docker-compose up -d
Or, shorter: docker-compose up -d --pull always (note that the --pull flag for up requires a recent Compose version).
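The two-step version spelled out (plain docker-compose commands, nothing project-specific assumed):

```shell
docker-compose pull      # fetch any newer images referenced by the compose file
docker-compose up -d     # recreate only the containers whose image changed
```

Services whose image digest is unchanged are left running; only the updated ones are recreated.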
You can use variable substitution in many places in a docker-compose.yml file, in particular including the image:. If you give every build a unique tag, then you can supply the tag as an environment variable, and it will work the way you describe.
Let's say the two images have the same tagging scheme (they don't necessarily need to). You could update the Compose file to say
version: '3.8'
services:
  nginx:
    restart: unless-stopped
    image: ghcr.io/user/frontend:${TAG:-latest} # <--
    ports:
      - 80:80
    depends_on:
      - backend
  backend:
    restart: unless-stopped
    image: ghcr.io/user/backend:${TAG:-latest} # <--
Notice the $TAG reference at the end of the image: lines. If TAG isn't set in the environment, it will use latest, but if it is, it will use that exact build.
Now, if I run:
TAG=20221020 docker-compose up -d
For both containers, Compose will notice that they're running an older build, automatically pull the updated image from GitHub, and restart both containers against the newer image.
This brings the mild complication of your continuous-deployment system needing to know the current image tag. In exchange, though, you get the ability to very easily roll back – if today's build turns out to have a critical bug you can run the exact same command with a different tag to redeploy on yesterday's build. A setup like this is also necessary if you're considering migrating to Kubernetes, since it depends on the text of the image: string changing to trigger a redeployment.
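A rollback is then the same command shape with the previous build's tag (the 20221019 tag here is hypothetical, standing in for yesterday's build):

```shell
# deploy today's build
TAG=20221020 docker-compose up -d
# critical bug found: redeploy yesterday's build with the same command
TAG=20221019 docker-compose up -d
```

Because every build has a unique tag, Compose sees a different image: string each time and recreates the containers accordingly.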
I am trying to upgrade an NXRM3 repository which is running in a docker container with a persistent volume attached to it. The existing docker container is a custom-built image, created by adding a couple of plugins through a Dockerfile. I want to build the latest version of the image with the newer versions of those plugins and run NXRM3 on the updated version, but how do I use the same volume with the new container? Can I attach the volume to the new container, and does that work? Any help regarding the safest process is much appreciated. Thanks in advance.
Below is the docker-compose file for the existing version:
services:
  nexus:
    container_name: nexus
    build: .
    ports:
      - "8080:8080"
      - "8081:8081"
      - "8082:8082"
    volumes:
      - "nexus-data:/nexus-data"
    restart: unless-stopped
volumes:
  nexus-data:
The volume exists independently of the container. So just create the new image and create a new container based on it with the original volume attached. To be completely on the safe side you can make a backup of the volume.
In case you keep the images in Nexus as well, be careful to have the new image available on the host before you bring down the old Nexus container.
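One common way to back up a named volume before the swap is to tar it from a throwaway container (a sketch; the nexus-data volume name comes from the compose file above, while the alpine image and the backup filename are assumptions):

```shell
docker run --rm \
  -v nexus-data:/nexus-data:ro \
  -v "$(pwd)":/backup \
  alpine tar czf /backup/nexus-data-backup.tgz -C / nexus-data
```

This mounts the volume read-only, writes the archive to the current directory on the host, and removes the helper container afterwards.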
I'm trying to write a docker-compose file that will build and push a versioned (1.0, 1.1...) and latest build of my image to my local v2 docker registry. However when I run docker-compose build I get the following error:
ERROR: Couldn't connect to Docker daemon - you might need to run `docker-machine start default`.
I found a lot of people complaining about this error for many different reasons; in my case it has nothing to do with permissions or whether or not the docker service is running. I narrowed it down to my image name having a URL in it (the URL of my local registry). I know that because if I name my image normally (like '/app:latest'), then the command runs fine. So how can I have a URL as the image name?
Here is what I'm trying to do (docker-compose.yaml):
version: "3.8"
x-marvin-backend: &default-marvin-backend
  container_name: marvin_backend
  build: ./marvin-api
  image: "http://my_registry_url:5000/marvin/backend:latest"
  ports:
    - "3000:3000"
  networks:
    - backend
x-marvin-frontend: &default-marvin-frontend
  container_name: marvin_frontend
  image: http://my_registry_url:5000/marvin/frontend:latest
  build:
    context: ./marvin-front
    args:
      - REACT_APP_SERVICES_HOST=http://marvin_backend:3000/
  ports:
    - "80:80"
  networks:
    - backend
  depends_on:
    - backend
services:
  backend: *default-marvin-backend
  backend_versioned:
    <<: *default-marvin-backend
    image: http://my_registry_url:5000/marvin/backend:1.0
  frontend: *default-marvin-frontend
  frontend_versioned:
    <<: *default-marvin-frontend
    image: http://my_registry_url:5000/marvin/frontend:1.0
networks:
  backend:
I'm new to docker in general. My main goal here is to have a simple, preferably one-command (e.g. docker-compose build) way to build and tag both my front-end and back-end images, so that I can just execute docker-compose push to push those newly created images to my registry running on AWS. With that I also want to be able to override the latest version of those images in the registry while also adding a versioned image for backup purposes, in case I want to revisit any of those versions in the future.
Then in the AWS EC2 machine I have another docker-compose.yaml file that just fetches the latest versions of both images and run their containers.
So to summarize I would develop the application on my local machine, then add the new version manually to the versioned services in the local docker-compose.yaml file, then run docker-compose build followed by docker-compose push; then ssh into my AWS machine and run docker-compose up to fetch the latest and newly updated images and run them.
This could later evolve into a CI/CD pipeline, but right now I'm taking baby steps and trying to get my image name to have a URL in it.
Thank you.
Edit
I tried using a .env with REGISTRY=http://my_registry_url:5000/marvin and then using image: "${REGISTRY}/frontend:latest" or image: "$${REGISTRY}/frontend:latest", but that also didn't work.
Just remove the http:// part from your images.
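Image references are host:port/path:tag with no URL scheme, so the corrected lines would look like this (a sketch keeping the placeholder registry host from the question):

```yaml
services:
  backend:
    # registry host and port only -- no http://
    image: my_registry_url:5000/marvin/backend:latest
  backend_versioned:
    image: my_registry_url:5000/marvin/backend:1.0
```

The same change applies to the frontend services and to the .env variable: REGISTRY=my_registry_url:5000/marvin.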
I have a docker-compose.yml file which takes the image svka4019/notes2:latest from the docker hub.
However, if I change the image, build it, and push it, when I run docker-compose it just uses the one it has already downloaded before.
Here is the docker-compose.yml:
springboot-docker-compose-app-container:
  image: svka4019/notes2:latest
  ports:
    - "80:5001"
  depends_on:
    - friendservice
  networks:
    - mynet
  container_name: base_notes
friendservice:
  build: ./Pirmas
  command: python app.py
  ports:
    - 5000:5000
  container_name: friend
  networks:
    - mynet
networks:
  mynet:
And the command I use for building and running: docker-compose up --build -d.
For updating the image in docker-hub I use:
docker build -t svka4019/notes2 .
docker push svka4019/notes2
If I use options such as --no-cache, it just rebuilds the friendservice container and skips the base one.
As @DazWilkin pointed out in the comments, the latest tag should be used carefully. Not only can it introduce bugs in your app if latest comes with BC breaks, but it also doesn't tell Docker that an update must be pulled if your machine already has an image tagged latest.
In your case, what you have to do should you want to keep using latest, is to simply call:
docker-compose pull
In case you are building your own image, you must do:
docker-compose build --pull
The latter will tell docker-compose to first pull the base image before building your custom image.
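Put together, an update cycle for a setup that mixes pulled and locally built images might look like this (a sketch using only standard docker-compose commands):

```shell
docker-compose pull          # refresh images that come straight from a registry
docker-compose build --pull  # rebuild local images on top of a freshly pulled base
docker-compose up -d         # recreate only the containers whose image changed
```

In this question's compose file, pull covers svka4019/notes2:latest while build --pull covers the friendservice built from ./Pirmas.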
I am trying to deploy a Docker Registry with custom storage location. The container runs well but I see no file whatsoever at the specified location. Here is my docker-compose.yaml:
version: "3"
services:
  registry:
    image: registry:2.7.1
    deploy:
      replicas: 1
      restart_policy:
        condition: always
    ports:
      - "85:5000"
    volumes:
      - "D:/Personal/Docker/Registry/data:/var/lib/registry"
For volumes, I have tried:
"data:/var/lib/registry"
./data:/var/lib/registry
"D:/Personal/Docker/Registry/data:/var/lib/registry"
The yaml file lives at D:\Personal\Docker\Registry, and docker-compose up is run from there. I tried to push and pull an image to localhost:85 and everything works well, so it must store the data somewhere.
Please tell me where I did wrong.
I solved it, but for my very specific case and with a different image, so I will just post it here in case someone like me needs it. This question still needs an answer for the official Docker image.
I have just realized the image is Linux-only, and it turned out I couldn't run it on Windows Server, so I switched to the stefanscherer/registry-windows image. I changed the volumes declaration to:
volumes:
  - ./data:c:\registry
  - ./certs:c:\certs
Both the storage and the certs work correctly. I am not sure how to fix it on Linux though, as I have never used Linux before.