`docker-compose run --rm` does not remove depends_on services - docker

I have a docker-compose.yml file like this,
version: "3.9"
services:
  db-test:
    image: mariadb:10.6
    profiles:
      - test
  be:
    build:
      context: .
      dockerfile: test.Dockerfile
    depends_on:
      - db-test
    volumes:
      - gradle-cache:/home/gradle/.gradle
    profiles:
      - test
# volumes
volumes:
  gradle-cache:
    driver: local
This is for testing the image inside a specific environment. It's a one-time task, so I use docker-compose run --profile test --rm be.
This runs perfectly fine.
My concern is that, after the run is over, the dependency service db-test is still running. The be container is automatically deleted thanks to the --rm option.
Is there a way to clean up everything? I.e., the containers for both the be and db-test services must be deleted after the run is complete, like with docker-compose down.

I think my answer is late for you, but it may be useful for others on the Internet.
You ran your command with the profile option: docker-compose run --profile test --rm be. So when you want to delete all containers related to the profile, you need to run docker-compose down with the profile option too: docker-compose --profile test down.

Related

Docker compose up before build [duplicate]

How to access the running containers during new container docker build?
Need to access the database container during the build of the application container
docker-compose
version: '3'
services:
  db:
    build: ./db
    ports:
      - 1433:1433
    networks:
      - mynetwork
  app:
    build: ./app
    ports:
      - 8080:8080
    depends_on:
      - db
    networks:
      - mynetwork
networks:
  mynetwork: {}
I tried to bring up the db prior to building the app container, but it's not working:
docker-compose build db
docker-compose up -d db
docker-compose build app
You can't, and it's not a good idea. For example, if you run:
docker-compose build
docker-compose down -v
docker-compose up
The down step will delete all of the containers and their underlying storage (including the contents of the database); then the up step will create all new containers from existing images without re-running the Dockerfile. Even if you added a --build option, Docker's layer caching would conclude that the filesystem output of your database setup command hasn't changed, and will skip re-running that step.
You can encounter a similar problem if you docker push the built image to some registry and run it on a different host: since the image is reusable, commands from its Dockerfile won't get re-run, but it's not the same database, so the setup won't get done.
Depending on what kind of setup you're trying to do, probably the best approach is to configure your image with an entrypoint script that runs your application's database migrations, then exec "$@" to run the main container command. It can also work to put setup commands in the database's /docker-entrypoint-initdb.d directory, though these won't get re-run if your application's database schema changes.
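A minimal sketch of that entrypoint pattern (MIGRATE_CMD is a placeholder for your real migration command, not something from the question; here it defaults to a no-op so the script is self-contained):

```shell
# Write a minimal entrypoint sketch to a file. MIGRATE_CMD is a placeholder
# for your actual migration tool (an assumption for illustration).
cat > entrypoint.sh <<'EOF'
#!/bin/sh
set -e
${MIGRATE_CMD:-true}   # run migrations first; defaults to a no-op here
exec "$@"              # then hand off PID 1 to the main container command
EOF
chmod +x entrypoint.sh
```

In the Dockerfile you would then set ENTRYPOINT ["./entrypoint.sh"] and keep the main command in CMD, so migrations run on every container start rather than at build time.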
At a technical level, this doesn't work because the docker build environment isn't on any particular Docker network, neither the mynetwork you manually specify nor the default network Compose creates on its own. The build sequence runs separately from running the resulting image, and it ignores most of the Docker Compose settings.

Docker running old images not picking up changes

I have been stuck with Docker not picking up any changes.
I released my app a few days ago as v1.0.0.0, and since it's a sort of pre-release I still have some bug fixes to do; I'm currently at v1.0.5.0. But for some reason, every time I deploy it seems to run an old image and not the new one with my bug fix in it.
Firstly, I'm not using CI/CD pipelines for the moment; I do everything manually (still learning).
What I do to run:
I run docker-compose -f docker-compose.yml -f docker-compose-override.yml up -d
This works.
When I want to redeploy, I overwrite my files by deleting them and putting my new files in the folder.
I stop my containers:
docker-compose -f docker-compose.yml -f docker-compose-override.yml down
and when everything is ready I start them again with:
docker-compose -f docker-compose.yml -f docker-compose-override.yml up -d
But for some reason, when I do this it does not pick up the changes. I think it tries to load the old untagged image or something, although when I run docker image ls I see nothing unusual.
I can work around this by deleting everything with:
docker rmi $(docker images -a -q)
This deletes all images, and when I start my containers again the new image gets built.
But obviously I'm not going to remove all my images every time.
My web app consists of an API, a frontend in Blazor, and an MVC app, all in .NET 5.0, with the production database running on another VM.
My compose file:
version: '3.4'
services:
  portfoliorepositoryapi:
    image: ${DOCKER_REGISTRY-}portfoliorepositoryapi:${TAG:-latest}
    container_name: "PortfolioApi"
    ports:
      - "5100:80"
      - "5101:443"
      - "1433:1433"
    build:
      context: .
      dockerfile: PortfolioRepositoryApi/Dockerfile
  portfolio-frontend:
    image: ${DOCKER_REGISTRY-}portfoliofrontend:${TAG:-latest}
    container_name: "PortfolioFrontend"
    ports:
      - "5104:80"
      - "5105:443"
    build:
      context: .
      dockerfile: Portfolio-Frontend/Dockerfile
    volumes:
      - ./Portfolio-Frontend/Files:/app/Files
  cmsapp:
    image: ${DOCKER_REGISTRY-}cmsapp:${TAG:-latest}
    container_name: "CMSAPP"
    ports:
      - "5102:80"
      - "5103:443"
    build:
      context: .
      dockerfile: CMSApp/Dockerfile
What do I need to do so it picks up the changes?
Try using the --build option of docker-compose up, as specified in the docs. There's also a --force-recreate option available, which you may or may not find useful depending on your workflow.
In your case, your command would be something similar to:
docker-compose -f docker-compose.yml -f docker-compose-override.yml up --build -d

Docker: How to update your container when your code changes

I am trying to use Docker for local development. The problem is that when I make a change to my code, I have to run the following commands to see the updates locally:
docker-compose down
docker images # Copy the name of the image
docker rmi <IMAGE_NAME>
docker-compose up -d
That's quite a mouthful, and takes a while. (Possibly I could make it into a bash script, but do you think that is a good idea?)
My real question is: Is there a command that I can use (even manually each time) that will update the image & container? Or do I have to go through the entire workflow above every time I make a change in my code?
Just for reference, here is my Dockerfile and docker-compose.yml.
Dockerfile
FROM node:12.18.3
WORKDIR /usr/src/app
COPY package*.json ./
RUN npm install
COPY . .
RUN npm run build
EXPOSE 4000
CMD ["npm", "start"]
docker-compose.yml
version: "2"
services:
  web:
    build:
      context: .
      dockerfile: Dockerfile
    container_name: web
    restart: always
    ports:
      - "3000:3000"
    depends_on:
      - mongo
  mongo:
    container_name: mongo
    image: mongo
    volumes:
      - ./data:/data/db
    ports:
      - "27017:27017"
Even though there are multiple good answers to this question, I think they missed the point, as the OP is asking about the local dev environment. The command I usually use in this situation is:
docker-compose up -d --build
If there aren't any errors in Dockerfile, it should rebuild all the images before bringing up the stack. It could be used in a shell script if needed.
#!/bin/bash
sudo docker-compose up -d --build
If you need to tear down the whole stack, you can have another script:
#!/bin/bash
sudo docker-compose down -v
The -v flag removes all the volumes so you can have a fresh start.
NOTE: In some cases, sudo might not be needed to run the command.
When a Docker image is built, the artifacts are already copied into it, and no new change can be reflected until you rebuild the image.
But
If it is only for local development, you can leverage volume sharing to update code inside the container at runtime. The idea is to share your app/repo directory on the host machine with /usr/src/app (as per your Dockerfile); with this approach, your code (and new changes) will appear both on the host and in the running container.
Also, you will need to restart the server on every change; for this you can run your app using nodemon (it watches for changes in the code and restarts the server).
Changes required in docker-compose.yml:
services:
  web:
    ...
    container_name: web
    ...
    volumes:
      - /path/in/host/machine:/usr/src/app
    ...
    ...
    ports:
      - "3000:3000"
    depends_on:
      - mongo
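For completeness, the matching Dockerfile change would be to start the app with nodemon instead of npm start. This is only a sketch under the assumption that nodemon is acceptable as a global install and that the dev command is `npm start`; adjust for your project:

```dockerfile
# development-oriented sketch; nodemon install and start command are assumptions
FROM node:12.18.3
WORKDIR /usr/src/app
COPY package*.json ./
RUN npm install && npm install -g nodemon
# no "COPY . ." needed for dev: the bind mount provides the source at runtime
EXPOSE 4000
CMD ["nodemon", "--exec", "npm", "start"]
```

With the bind mount from the compose snippet above, edits on the host are visible in the container immediately, and nodemon restarts the server on each change.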
You may use Docker Swarm as an orchestration tool to apply rolling updates. Check Apply rolling updates to a service.
Basically, you issue docker-compose up once (perhaps from a shell script), and once your containers are running, you can create a Jenkinsfile or configure a CI/CD pipeline to pull the updated image and apply it to the running service with docker service update --image <NEW_IMAGE> <SERVICE>.

Docker compose run multiple commands one after another

I recently started working with docker-compose and I am running into an issue where I can't find any helpful answers while googling. What I want to do is run two commands one after another. First I want to train my model, which places a trained model in a folder. The second command then runs the model. However, right now both commands start together, the image is loaded twice, and the volume is created twice.
So my question is: is it possible to run multiple commands one after another, and how does that work? I also wonder how my trained model is put into the volume docker-compose is running on. Can I somehow set a path to that volume as an output?
My docker-compose file so far:
version: '3.3'
networks: {rasa-network: {}}
services:
  rasa:
    image: rasa/rasa:latest-full
    ports:
      - "5005:5005"
    volumes:
      - ./rasa/:/app/
    command: run -vv -m models/test_model/ --enable-api --endpoints endpoints.yml --credentials credentials.yml
    networks: ['rasa-network']
    depends_on:
      - training
      - duckling
  duckling:
    image: rasa/duckling:latest
    ports:
      - "8000:8000"
    networks: ['rasa-network']
  training:
    build: .
    image: rasa/rasa:latest-full
    command: train --data data/ -c config.yml -d domain.yml --out models/test_model
    volumes:
      - ./rasa/:/app/
According to the documentation of depends_on, Docker Compose is not able to determine the readiness of a container, so as soon as the dependencies have started, the dependent container will start, regardless of whether the others are actually ready.
The workaround is to write a wrapper shell script that checks that the dependencies (duckling and training) have finished their work before starting rasa. If rasa needs some files from the other two containers, the script can loop, checking whether those files exist: if so, exit the loop and run the real command; otherwise, sleep a few seconds and retry.
Then, the rasa command would execute only this script, for example:
command: ["./wait-for-dependencies.sh", "duckling", "training"]
You can have a look here: https://docs.docker.com/compose/startup-order/, they have made some examples for a similar use-case.
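A minimal sketch of such a wait loop (the wait-for-dependencies.sh name and the model path are assumptions taken from the compose file above):

```shell
# wait_for_file: poll until a file exists, giving up after a number of tries.
# In the rasa container you might wait for e.g. /app/models/test_model
# before starting the server.
wait_for_file() {
  file="$1"
  tries="${2:-30}"        # default: 30 attempts, one second apart
  i=0
  while [ ! -e "$file" ]; do
    i=$((i + 1))
    if [ "$i" -ge "$tries" ]; then
      return 1            # gave up: dependency never became ready
    fi
    sleep 1
  done
  return 0
}
```

A wait-for-dependencies.sh would call wait_for_file once per expected artifact and then exec the real rasa command, so the container fails fast if training never produces a model.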

docker-compose up not recreate container

I created two containers, one an Oracle DB and one an Apache Tomcat.
I run both of them using the following Docker Compose file:
version: '3.4'
services:
  tomcat:
    build: ./tomcat/.
    ports:
      - "8888:8080"
      - "59339:59339"
    depends_on:
      - oracle
    volumes:
      - ./tomcat/FILES:/usr/test/FILES
      - ./ROOT.war:/opt/tomcat/webapps/ROOT.war
    expose:
      - "8888"
      - "59339"
  oracle:
    build: ./database/.
    ports:
      - "49161:1521"
    environment:
      - ORACLE_ALLOW_REMOTE=true
    expose:
      - "49161"
I use the command docker-compose up, which according to the documentation should recreate the containers.
But in reality it only starts the old containers (same container IDs) with the state they had when they were stopped. This is a problem because I use it for testing and I want to start from a clean situation (ROOT.war must be deployed every time I run the command).
Is this normal, or am I missing something?
I'm using Docker for Windows 18.06.1-ce and Compose 1.22.0.
UPDATE
So it is not true that up recreates containers; it does so only if something changed?
I also see docker-compose down, which removes the containers and forces up to recreate them. Is that the right approach?
The thing I don't understand is why the status of the container is saved every time I stop it (the app.pid file created by Tomcat is still present after a simple up without a previous down).
docker-compose starts and stops containers; if you want to recreate them every time, you have to pass the --force-recreate flag, as per the docs.
Yes, this is as expected.
Sounds like you want to do a restart:
docker-compose restart
or to rebuild the images and recreate the containers:
docker-compose up --build
--force-recreate will recreate the containers.
From the docs:
--force-recreate => Recreate containers even if their configuration and image haven't changed.
docker-compose up -d --force-recreate
