Restart Docker Container with new Image

One of my Docker containers can update itself (it talks to the Docker daemon using the Spotify Docker Client). After downloading the new image, the container has to be restarted with that new image.
If I just kill the running process inside the container, Docker restarts it using the old image. Is there any reliable way to force recreating the container with the new image? I couldn't find anything in the docker-compose docs. It's a single-host environment only; no Kubernetes or anything like that is in use.
Compose file snippet:
dockerctl:
  image: myimage
  container_name: dockerctl
  networks:
    - mynetwork
  ports:
    - "8099:8080"
  volumes:
    - /var/run/docker.sock:/var/run/docker.sock
  restart: always

Bit of an old question, but this should do it: update the image name (or tag) in your docker-compose.yml, then recreate just that service by running docker-compose up -d --no-deps dockerctl.
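For example, assuming the updated image is tagged myimage:2.0 (the tag here is only illustrative), the flow might look like this:
docker pull myimage:2.0
# edit docker-compose.yml so the dockerctl service uses image: myimage:2.0, then:
docker-compose up -d --no-deps dockerctl
up -d only recreates the container if its image or configuration changed, and --no-deps keeps Compose from starting or recreating the services it depends on.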

How to access the running containers during new container docker build?
I need to access the database container during the build of the application container.
docker-compose
version: '3'
services:
  db:
    build: ./db
    ports:
      - 1433:1433
    networks:
      - mynetwork
  app:
    build: ./app
    ports:
      - 8080:8080
    depends_on:
      - db
    networks:
      - mynetwork
networks:
  mynetwork: {}
I tried to bring up the db prior to building the app container, but it doesn't work:
docker-compose build db
docker-compose up -d db
docker-compose build app
You can't, and it's not a good idea. For example, if you run:
docker-compose build
docker-compose down -v
docker-compose up
The down step will delete all of the containers and their underlying storage (including the contents of the database); then the up step will create all new containers from existing images without re-running the Dockerfile. Even if you added a --build option, Docker's layer caching would conclude that the filesystem output of your database setup command hasn't changed, and will skip re-running that step.
You can encounter a similar problem if you docker push the built image to some registry and run it on a different host: since the image is reusable, commands from its Dockerfile won't get re-run, but it's not the same database, so the setup won't get done.
Depending on what kind of setup you're trying to do, probably the best approach is to configure your image with an entrypoint script that runs your application's database migrations and then uses exec "$@" to run the main container command. It can also work to put setup commands in the database's /docker-entrypoint-initdb.d directory, though these won't get re-run if your application's database schema changes.
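As a rough sketch of that entrypoint approach (the migration command and file names here are placeholders, not something from your project):
#!/bin/sh
# docker-entrypoint.sh: do one-time setup, then hand off to the main command
set -e
# placeholder: replace with your real migration/seed command
/app/run-migrations.sh
# replace this shell with whatever CMD (or compose "command:") was passed in
exec "$@"
In the Dockerfile you would then set something like ENTRYPOINT ["/app/docker-entrypoint.sh"] and keep your normal CMD, so the setup runs against the live database every time the app container starts.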
At a technical level, this doesn't work because the docker build environment isn't on any particular Docker network, neither the mynetwork you manually specify nor the default network Compose creates on its own. The build sequence runs separately from running the resulting image, and it ignores most of the Docker Compose settings.

Docker container log does not appear anymore on Docker compose log

I'm running my Docker container through Docker Compose, but when the container stops and restarts, no logs related to the restarted container appear anymore.
Would anyone know how to fix it?
Below are the docker-compose command and the file for analysis.
Thank you in advance.
Command to start the compose
docker-compose -f docker-compose.dev.yml up
Docker compose
version: '3'
services:
  ms3_executive_back:
    image: ms3_executive_backend
    ports:
      - "5001:5001"
    volumes:
      - ./executive_backend:/app
    restart: always
If you want to inspect the logs to determine the cause of failure, you can try setting restart: "no". This ensures that docker-compose does not automatically restart your container and overwrite the existing logs.
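A minimal sketch of that change, keeping the rest of your file as-is:
services:
  ms3_executive_back:
    image: ms3_executive_backend
    restart: "no"   # don't auto-restart; the container stays stopped so you can inspect it
You can then read the stopped container's logs with docker-compose -f docker-compose.dev.yml logs ms3_executive_back, or with docker logs <container name>.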

docker-compose up not recreate container

I create two containers, one an Oracle DB and one an Apache Tomcat.
I run both of them using the following docker-compose file:
version: '3.4'
services:
  tomcat:
    build: ./tomcat/.
    ports:
      - "8888:8080"
      - "59339:59339"
    depends_on:
      - oracle
    volumes:
      - ./tomcat/FILES:/usr/test/FILES
      - ./ROOT.war:/opt/tomcat/webapps/ROOT.war
    expose:
      - "8888"
      - "59339"
  oracle:
    build: ./database/.
    ports:
      - "49161:1521"
    environment:
      - ORACLE_ALLOW_REMOTE=true
    expose:
      - "49161"
I use the command docker-compose up, which according to the documentation should recreate the containers.
But in reality it only starts the old containers (same container IDs) in whatever state they were in when they were stopped. This is a problem because I use it for testing, and I want to start from a clean situation (ROOT.war must be deployed every time I run the command).
Is this normal, or am I missing something?
I'm using Docker for Windows 18.06.1-ce and Compose 1.22.0.
UPDATE
So it's not true that up recreates containers; it only does so if something has changed?
I also see docker-compose down, which removes the containers and forces up to recreate them. Is that the right approach?
The thing I don't understand is why the state of the container is saved every time I stop it (the app.pid file created by Tomcat is still present after a simple up without a previous down).
docker-compose starts and stops containers; if you want to recreate them every time, you have to pass the --force-recreate flag, as per the docs.
Yes, this is as expected.
Sounds like you want to do a restart:
docker-compose restart
or to force a rebuild:
docker-compose up --build
--force-recreate will recreate the containers.
From the docs:
--force-recreate => Recreate containers even if their configuration and image haven't changed.
docker-compose up -d --force-recreate
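If the goal is a completely clean slate on every test run, a common pattern (a sketch, not the only option) is to tear the containers down first and then bring them back up with recreation forced:
docker-compose down        # removes the old containers (add -v to also drop volumes)
docker-compose up -d --build --force-recreate
Recreating the containers discards anything that was written to the container filesystem, such as the app.pid file mentioned above, so each run starts from the built image plus the bind-mounted ROOT.war.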

I want to run a docker-compose.yml on a remote docker daemon, what about volumes?

I want to run docker-compose up on a remote docker daemon:
DOCKER_HOST=tcp://...:2375 docker-compose up
In docker-compose.yml, I have a volume binding to a local file:
version: "3"
services:
nginx:
image: nginx:latest
ports:
- 80:80
volumes:
- ./etc/nginx/nginx.conf:/etc/nginx/nginx.conf:ro
This won't work, as the remote docker daemon will be unable to locate ./etc/nginx/nginx.conf.
What is the best approach to handle this?
Extend the existing Docker image by creating your own image.
Ref: How to extend existing docker container?
Copy the relevant files (from the Docker build context) into the appropriate directory; they will then be baked into the image and therefore available on the remote Docker daemon as well.
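A minimal sketch of that approach (the paths follow the compose file above; the image name is only illustrative):
# Dockerfile: bake the config into a derived image instead of bind-mounting it
FROM nginx:latest
COPY etc/nginx/nginx.conf /etc/nginx/nginx.conf
Build and push it somewhere the remote daemon can pull from, for example docker build -t registry.example.com/nginx-custom . followed by docker push registry.example.com/nginx-custom, then reference that image in docker-compose.yml and drop the volumes: entry.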

Multiple docker images run from docker file

I am trying to run multiple Docker containers, each on a different port, from a single docker file.
Please advise how to execute multiple "docker run" commands from a single docker file with different ports.
It sounds like you want to use docker-compose. Here is an example using nginx and redis (it's how I do it, anyway):
services:
  nginx:
    image: nginx
    ports:
      - "80:80"
  redis:
    image: redis
    ports:
      - "1000:1000"
So as you can see, if I run docker-compose up, Docker will spin up two containers, nginx and redis, each running on a different port! If you don't want to use docker-compose, you can do it with docker run:
docker run -d --name nginx -p 80:80 nginx
docker run -d --name redis -p 1000:1000 redis
I don't 100% understand your question, but I hope this helps!
