My server has a weird bug I cannot track down, so I just deleted all Docker images and downloaded them again. Strangely, the same bug now also appears in the updated version of the server. My hunch is that Docker does not download the exact same images but rather some updated versions, which cause this bug.
The question is: how do I force Docker to use the exact same versions as before?
Looking at the images I can see that rabbitmq and mongo have different "created" dates, although their version numbers are specified in the docker-compose file:
services:
  messageq:
    image: rabbitmq:3
    container_name: annotator_message_q
    environment:
      - RABBITMQ_DEFAULT_USER=user
      - RABBITMQ_DEFAULT_PASS=password
    networks:
      - cocoannotator
  database:
    image: mongo:4.0
    container_name: annotator_mongodb
    restart: always
    environment:
      - MONGO_DATA_DIR=/data/db
      - MONGO_LOG_DIR=/dev/null
    volumes:
      - "mongodb_data:/data/db"
    command: "mongod --smallfiles --logpath=/dev/null"
Is the specification rabbitmq:3 and mongo:4.0 not specific enough?
It is not. Tags are mutable in Docker Hub and in other Docker registries by default. This means you may have an unlimited number of actual images over time, all registered as rabbitmq:3.
The foolproof way to pin a specific version is to use sha256 digests. This is the only recommended way for live systems. I.e. instead of rabbitmq:3, use
rabbitmq@sha256:fddabeb47970c60912b70eba079aae96ae242fe3a12da3f086a1571e5e8c921d
Unfortunately for your case, if you have already deleted all your images, you may not be able to recover the exact version. If you still have them somewhere, run something like docker images --digests | grep rabbitmq, or docker image inspect on the matching images, to find out their sha256 digests.
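Once you have a digest, you can pin it directly in the compose file. A sketch reusing the example digest above (your real digest will differ):

```yaml
services:
  messageq:
    # Pinned by content digest: this always resolves to exactly the same
    # image bytes, even if the rabbitmq:3 tag is later moved to a newer build.
    image: rabbitmq@sha256:fddabeb47970c60912b70eba079aae96ae242fe3a12da3f086a1571e5e8c921d
```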
Related
I have a compose.yml like this one:
version: '3.7'
services:
  nginx:
    restart: unless-stopped
    image: ghcr.io/user/frontend:latest
    ports:
      - 80:80
    depends_on:
      - backend
  backend:
    restart: unless-stopped
    image: ghcr.io/user/backend:latest
    entrypoint: /home/app/web/wsgi-entrypoint.sh
    expose:
      - 8000
We have 2 images stored on GitHub: frontend and backend.
My goal is the following: when an image has been updated on the GitHub Docker Registry, I'd like to automatically update the image on the server and launch the new one, substituting the old one via docker-compose.
For example: I have a running compose made of frontend and backend, but I just pushed a new image: ghcr.io/user/frontend:latest.
Now, I want a single command which updates only the images that have changed (in this case ghcr.io/user/frontend:latest), so that when I reload the frontend webpage I see the changes.
My attempt is the following:
docker-compose up -d --build
But the system says:
compose-backend_1 is up-to-date
compose-nginx_1 is up-to-date
which is not true!
So, the working procedure I use is a bit manual:
docker pull ghcr.io/user/frontend:latest
I see in the console: Status: Downloaded newer image, which is proof that a new image has been downloaded.
Then, if I relaunch the same command the console displays: Status: Image is up to date for ghcr.io/user/frontend:latest
Finally:
docker-compose up -d --build
says: Recreating compose-nginx_1 ... done
I suppose the command docker-compose up -d --build ALONE is not looking for new images and so does not update the image that is changed.
So, is there a SINGLE specific command to fix this?
This should be achieved by running docker-compose pull, and then docker-compose up -d.
Or, shorter, with Compose v2: docker compose up -d --pull always
You can use variable substitution in many places in a docker-compose.yml file, in particular including the image:. If you give every build a unique tag, then you can supply the tag as an environment variable, and it will work the way you describe.
Let's say the two images have the same tagging scheme (they don't necessarily need to). You could update the Compose file to say
version: '3.8'
services:
  nginx:
    restart: unless-stopped
    image: ghcr.io/user/frontend:${TAG:-latest} # <--
    ports:
      - 80:80
    depends_on:
      - backend
  backend:
    restart: unless-stopped
    image: ghcr.io/user/backend:${TAG:-latest} # <--
Notice the $TAG reference at the end of the image: lines. If TAG isn't set in the environment, it will use latest, but if it is, it will use that exact build.
Now, if I run:
TAG=20221020 docker-compose up -d
For both containers, Compose will notice that they're running an older build, automatically pull the updated image from GitHub, and restart both containers against the newer image.
This brings the mild complication that your continuous-deployment system needs to know the current image tag. In exchange, though, you get the ability to roll back very easily: if today's build turns out to have a critical bug, you can run the exact same command with a different tag to redeploy yesterday's build. A setup like this is also necessary if you're considering migrating to Kubernetes, since Kubernetes likewise depends on the text of the image: string changing to trigger a redeployment.
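The ${TAG:-latest} syntax is ordinary POSIX default-value parameter expansion, so you can sanity-check what Compose will substitute without running it. A small sketch using the image name from the example above:

```shell
#!/bin/sh
# Mimics how Compose expands ${TAG:-latest} in the image: line.
image_ref() {
  echo "ghcr.io/user/frontend:${TAG:-latest}"
}

TAG=20221020
image_ref    # -> ghcr.io/user/frontend:20221020

unset TAG
image_ref    # -> ghcr.io/user/frontend:latest
```

With TAG set, the exact build is used; with TAG unset or empty, the expansion falls back to latest.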
I am using the official Postgres 12 image, which I'm pulling inside the docker-compose.yml. Everything is working fine.
services:
  db:
    container_name: db
    image: postgres:12
    volumes:
      - ...
    ports:
      - 5432:5432
    environment:
      - POSTGRES_USER=...
Now, when I run docker-compose up, I get this image
My question is: is there a way to rename the image inside docker-compose.yml? I know there is a command for that, but I'd like everything to be inside the compose file if possible.
Thanks!
In a Compose file, there's no direct way to run docker tag or any other command that modifies some existing resource.
If you're trying to optionally point Compose at a local mirror of Docker Hub, you can take advantage of knowing the default repository is docker.io and use an optional environment variable:
image: ${REGISTRY:-docker.io}/postgres:latest
REGISTRY=docker-mirror.example.com docker-compose up
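The same default-value expansion can be checked in plain shell, since Compose uses POSIX parameter-expansion syntax here. A small sketch with the names from above:

```shell
#!/bin/sh
# Mimics Compose's ${REGISTRY:-docker.io} expansion in the image: line.
image_ref() {
  echo "${REGISTRY:-docker.io}/postgres:latest"
}

unset REGISTRY   # make sure nothing is inherited from the environment
image_ref        # -> docker.io/postgres:latest

REGISTRY=docker-mirror.example.com
image_ref        # -> docker-mirror.example.com/postgres:latest
```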
Another possible approach is to build a trivial image that doesn't actually add anything to the base postgres image:
build:
  context: .
  dockerfile: Dockerfile.postgres
image: local-images.example.com/my-project/postgres
# Dockerfile.postgres
FROM postgres:latest
# End of file
There's not really any benefit to doing this beyond the cosmetic appearances in the docker images output. Having it be clear that you're using a standard Docker Hub image could be slightly preferable; its behavior is better understood than something you built locally and if you have multiple projects running at once they can more obviously share the same single image.
I'm developing locally on a Windows 10 PC, and have Docker images installed on drive D.
Running the command 'docker images' shows...
When I run a 'docker-compose up' command I'm getting the following error...
Pulling eis-config (eis/eis-config:)...
ERROR: The image for the service you're trying to recreate has been removed. If you continue, volume data could be lost. Consider backing up your data before continuing.
Continue with the new image? [yN]
Any idea why this is happening? (Could it be that docker-compose is looking for the images on Docker Hub, rather than locally?)
The 'docker-compose.yml' file is shown below...
version: "3.7"
services:
  eis-config:
    image: eis/eis-config
    ports:
      - "8001:8001"
  eis-eureka:
    image: eis/eis-eureka
    ports:
      - "8761:8761"
    depends_on:
      - eis-config
  eis-zuul:
    image: eis/eis-zuul
    ports:
      - "8080:8080"
    depends_on:
      - eis-eureka
  gd-service:
    image: eis/gd-service
    ports:
      - "8015:8015"
    depends_on:
      - eis-eureka
Run
docker-compose kill
docker-compose down
docker-compose up
This should fix your issue; most likely you have an old container (running or not) that's causing the problem.
You are running eis/eis-config without an image tag, so latest is implicitly assumed by Docker, and you don't have an eis/eis-config image with the latest tag. So either build your image with the latest tag, or use image: eis/eis-config:0.0.1-SNAPSHOT.
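If you build the image through Compose itself, you can apply the tag in the same step. A sketch assuming a hypothetical ./eis-config directory containing the Dockerfile:

```yaml
services:
  eis-config:
    build: ./eis-config                    # hypothetical build context
    image: eis/eis-config:0.0.1-SNAPSHOT   # tag applied to the freshly built image
    ports:
      - "8001:8001"
```

docker-compose up --build will then build and tag the image before starting the container, so the tag referenced elsewhere always exists locally.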
It seems like the image for the eis/eis-config service is missing; check your yml file and regenerate the image for that service.
Where are you trying to run those images, locally on your machine or on a remote server?
Have a look at this link: Error running containers with docker-compose, error pulling images from my private repo in dockerhub
I am trying to deploy a Docker Registry with custom storage location. The container runs well but I see no file whatsoever at the specified location. Here is my docker-compose.yaml:
version: "3"
services:
  registry:
    image: registry:2.7.1
    deploy:
      replicas: 1
      restart_policy:
        condition: always
    ports:
      - "85:5000"
    volumes:
      - "D:/Personal/Docker/Registry/data:/var/lib/registry"
For volumes, I have tried:
"data:/var/lib/registry"
./data:/var/lib/registry
"D:/Personal/Docker/Registry/data:/var/lib/registry"
The YAML file sits in D:\Personal\Docker\Registry, and docker-compose up is run from there. I tried to push and pull an image to localhost:85 and everything works well, so it must store the data somewhere.
Please tell me where I went wrong.
I solved it, but for my very specific case and with a different image, so I will just post it here in case someone like me needs it. This question still needs an answer for the official Docker image.
I have just realized the image is Linux-only, and it turned out I couldn't run it on Windows Server, so I switched to the stefanscherer/registry-windows image. I changed the volumes declarations to:
volumes:
  - ./data:c:\registry
  - ./certs:c:\certs
Both the storage and the certs work correctly. I am not sure how to fix it on Linux, though, as I have never used Linux before.
Assuming I have created a Django project using docker-compose as per the example given in https://docs.docker.com/compose/django/
Below is a simple docker-compose.yml file, for understanding's sake
docker-compose.yml
version: '3'
services:
  db:
    image: postgres
  web:
    image: python:3
    command: python3 manage.py runserver 0.0.0.0:8000
    volumes:
      - .:/code
    ports:
      - "8000:8000"
    depends_on:
      - db
Here I am using two images, python:3 (called "web") and postgres (called "db"), which are automatically pulled from hub.docker.com and built accordingly. We also want the web container to depend on the db container. That is just to recap what's in the docker-compose.yml above.
Once I have set everything up, I run docker-compose up; the two containers are running and the Django project is served on my local machine.
Having developed my Django application, I now want to deploy it on the production server.
So how do I copy the images to the production server so that I am working with the same Docker images there as well?
Because if I create a docker-compose.yml file on the production server, there is a chance that the db image and web image may have changed.
Like:
When I build the postgres image on my development computer, say I have postgres version 9.5.
But if I build the postgres image again on the production server, then I may get postgres version 10.1 installed.
So I will not be working in the same environment; maybe on the same OS, but not with the same versions of packages.
So how do I handle this when I am shifting things to production?
Partially solved:
As per the answer of @Yogesh_D:
If I am using prebuilt images from Docker Hub, I can easily get the same environment on the production server by using the version number, like postgres:9.5.1 or python:3.
Partially unsolved:
But if I created an image on my own using my own Dockerfile and tagged it while building, and now I want to use the same image in production, how do I do that? It's not on Docker Hub, and I may not be interested in putting it on Docker Hub.
So is manually copying my image to the production server a good idea, or should I just copy the Dockerfile and build the image again on the production server?
This should be fairly simple, as the Compose image directive even lets you specify the tag for the image.
So in your case it would be something like this:
version: '3'
services:
  db:
    image: postgres:9.5.14
  web:
    image: python:3.6.6
    command: python3 manage.py runserver 0.0.0.0:8000
    volumes:
      - .:/code
    ports:
      - "8000:8000"
    depends_on:
      - db
The above file ensures that you get the 9.5.14 version of postgres and 3.6.6 version of python.
And no matter where you deploy this is exactly what you get.
Look at https://hub.docker.com/_/python for all the tags/versions available in the python image, and at https://hub.docker.com/_/postgres for all the versions/tags available for the postgres image.
Edit:
To solve the problem of custom images, you have a few options:
a. Depending on where you deploy (your own datacenter vs. public cloud providers), you can run your own Docker image registry; there are a lot of options here.
b. If you are running in one of the popular cloud providers like AWS, GCP, or Azure, most of them provide their own registries.
c. Or you can use Docker Hub to set up private repositories.
And each of them supports tags, so keep using tags to ensure your own custom images are deployed just like the public images.
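Whichever registry you choose, the compose file then references your custom image with a fully qualified name, exactly like a public one. A sketch with a hypothetical registry host, repository, and tag:

```yaml
services:
  web:
    # <registry host>/<repository>:<tag>
    image: registry.example.com/myteam/web:1.0.3
```

If you want to avoid a registry entirely, docker save can export an image to a tar archive that you copy to the server and import with docker load; that works for manual transfers, but a registry is far less manual once you deploy regularly.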