Apologies if this question is dumb or naive... we are still learning Docker. We are running Airflow in Docker. Here are the Docker images on our GCP compute engine:
ubuntu@our-airflow:~/airflow-dir$ docker image ls
REPOSITORY              TAG               IMAGE ID       CREATED          SIZE
our-airflow_webserver   latest            aaaaaaaaaaaa   17 minutes ago   968MB
<none>                  <none>            bbbbbbbbbbbb   22 minutes ago   2.13GB
apache/airflow          2.1.4             cccccccccccc   5 weeks ago      968MB
<none>                  <none>            dddddddddddd   2 months ago     2.01GB
python                  3.7-slim-buster   eeeeeeeeeeee   17 months ago    155MB
postgres                9.6               ffffffffffff   17 months ago    200MB
ubuntu@our-airflow:~/airflow-dir$
dddddddddddd was the image that used to run when we ran docker-compose up from the command line. However, we were testing a new Dockerfile, and built the new image aaaaaaaaaaaa with the tag our-airflow_webserver. dddddddddddd used to have this tag, but it was changed to <none> when we built aaaaaaaaaaaa.
We'd like to run docker-compose up dddddddddddd; however, this does not work. We get the error ERROR: No such service: dddddddddddd. How can we create a container using the image dddddddddddd with docker-compose up? Is this possible?
Edit: If I simply run docker run dddddddddddd, I do not get the desired output. I think this is because our docker-compose file launches all of the requisite services we need for Airflow (webserver, scheduler, metadata db).
Edit2: Here's the seemingly relevant webserver part of our docker-compose file:
webserver:
  # image:
  build:
    dockerfile: Dockerfile.Self
    context: .
can we simply uncomment image, and set it to image: dddddddddddd and then comment out the build part?
can we simply uncomment image, and set it to image: dddddddddddd
Yes, you can. If you want to start the service with another image, you must change the docker-compose.yml file.
and then comment out the build part?
You don't need to comment out the build part. The build section only takes effect when the specified image is not found or when the --build option is passed as an argument.
If you want to ensure that the image is not going to be built, pass the --no-build argument to the docker-compose up command. This avoids building the image even if it is missing.
Check the docker-compose up docs for further information.
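For example (just a sketch, not taken from your actual setup), the webserver service could be pointed at the old image like this, with the build section either left in place or commented out:

webserver:
  image: dddddddddddd   # or a tag you create yourself, e.g. our-airflow_webserver:old
  # build:
  #   dockerfile: Dockerfile.Self
  #   context: .

Referencing the raw image ID usually resolves because the engine accepts ID prefixes, but it is fragile and hard to read; re-tagging the old image first with docker tag dddddddddddd our-airflow_webserver:old and then using image: our-airflow_webserver:old is clearer. If you keep the build section, start the stack with docker-compose up --no-build as described above.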
Related
My process for updating a docker image to production (a docker swarm) is as follows:
On dev environment:
docker-compose build
docker push myrepo/name
Then on the prod server, which is a docker swarm:
docker pull myrepo/name
docker service update --image myrepo/name --with-registry-auth containername
This works perfectly; the swarm is updated with the latest image.
However, it always leaves the old image on the live servers and I'm left with something like this:
docker image ls
REPOSITORY    TAG      IMAGE ID   CREATED          SIZE
myrepo/name   latest   abcdef     14 minutes ago   1.15GB
myrepo/name   <none>   bcdefg     4 days ago       1.22GB
myrepo/name   <none>   cdefgh     6 days ago       1.22GB
Which, over time, results in a heap of disk space being used unnecessarily.
I've read that docker system prune is not safe to run in production, especially in a swarm.
So I am having to remove old images manually on a regular basis, e.g.
docker image rm bcdefg cdefgh
Am I missing a step in my update process, or is it 'normal' that old images are left over to be manually removed?
Thanks in advance
Since you are using Docker Swarm, and probably a multi-node setup, you could deploy a global service which does the cleanup for you. We are using Bret Fisher's approach for it:
version: '3.9'
services:
  image-prune:
    image: internal-image-registry.org/proxy-cache/library/docker:20.10
    command: sh -c "while true; do docker image prune -af --filter \"until=4h\"; sleep 14400; done"
    networks:
      - bridge
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    deploy:
      mode: global
      labels:
        - "env=devops"
        - "application=cleanup-image-prune"

networks:
  bridge:
    external: true
    name: bridge
When a new host is added, the service is deployed on it automatically (using our own base docker image) and then does the cleanup job for us.
We have not yet found the time to look into the newer Docker service types that are scheduled on their own. It would probably be wise to move the cleanup to the replicated/global job modes provided by Docker instead of an infinite loop in a script, but the current setup just works for us, so we have not made swapping over a high priority. More info on replicated jobs can be found in the Docker documentation.
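A rough, untested sketch of what that job-based alternative could look like from the CLI (the service name here is made up; --mode global-job needs Docker 20.10 or newer). Note that a job runs the prune once on every node and then exits, so something like a cron entry would still have to re-create it on a schedule:

docker service create \
  --name image-prune-job \
  --mode global-job \
  --mount type=bind,source=/var/run/docker.sock,target=/var/run/docker.sock \
  docker:20.10 \
  docker image prune -af --filter "until=4h"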
I have a multistage Dockerfile where the first stage is very compute intensive, so I want to make sure it's cached.
I created a simple Dockerfile and docker-compose.yaml to show what I mean.
Even if I rebuild it several times, only the second stage runs from the cache. The first stage is always rebuilt.
Dockerfile
FROM node:15.4 as build-stage
RUN sleep 5
RUN touch /test
# Stage 1, based on Nginx, to have only the compiled app, ready for production with Nginx
FROM nginx:1.17-alpine
COPY --from=build-stage /test /test
docker-compose.yaml
version: '3'
services:
  myservice:
    build:
      context: ./
      cache_from:
        - myservice:latest
        - myservice:build-stage
    image: myservice:latest
After running docker-compose build, I can see the following images:
REPOSITORY   TAG           IMAGE ID       CREATED          SIZE
myservice    latest        ae96394ac660   15 seconds ago   19.9MB
<none>       <none>        d4c212c2bcd1   15 seconds ago   936MB
node         15.4          6f7f341ab8b8   9 days ago       936MB
nginx        1.17-alpine   89ec9da68213   8 months ago     19.9MB
One could assume that the image called <none> is the build-stage image, but since it isn't tagged I can't make cache_from work.
If I had used only docker and not docker-compose I would do:
docker build . --target build-stage -t myservice:build-stage
This would build only the first stage and tag it so it can be used as cache afterwards. But is this possible with docker-compose?
I mean, there is a target: option in docker-compose.yaml, but if I add a separate service that builds and tags only the first stage, that service will also want to start when I do docker-compose up.
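One possible approach (a sketch, not something from the question itself) is to add a helper service that builds and tags only the first stage; the helper service name myservice-build and the profile name below are made up:

version: '3'
services:
  myservice-build:
    build:
      context: ./
      target: build-stage        # `target` needs a reasonably recent compose file format / docker-compose
    image: myservice:build-stage
    # profiles: ["build-only"]   # on docker-compose 1.28+ a profile keeps this service out of a plain `up`

  myservice:
    build:
      context: ./
      cache_from:
        - myservice:latest
        - myservice:build-stage
    image: myservice:latest

docker-compose build would then produce and tag both myservice:build-stage and myservice:latest. To avoid starting the helper, either enable the commented profiles line (if your docker-compose version supports profiles) or simply start only the main service with docker-compose up myservice.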
I created an Ubuntu container using docker-compose. Here is the relevant code from docker-compose.yml:
ubuntu-os:
  container_name: ubuntu
  image: ubuntu
  volumes:
    - ubuntu-datavolume:/home/username/docker/.os/ubuntu/

volumes:
  ubuntu-datavolume:
It gets stopped as soon as it is started, and I cannot interact with the container. Here is the relevant docker ps -a output:
03dae5416b67   ubuntu   "/bin/bash"   12 minutes ago   Exited (0) 3 minutes ago   ubuntu
I have tried every possible combination of docker start -a ubuntu, but with no luck. I want this container to persist data across restarts, so I created the volume. Any suggestions?
Creating a new container is not what I am looking for; I want to start the existing container and interact with it, not docker run a new one.
You are using the ubuntu image, whose default command is /bin/bash. If you launch it without an interactive terminal attached, it just runs and exits with code 0: your container finishes successfully.
You can add the option:
stdin_open: true
tty: true
Reference: https://docs.docker.com/compose/compose-file/#domainname-hostname-ipc-mac_address-privileged-read_only-shm_size-stdin_open-tty-user-working_dir
Or add a command that keeps the container busy, e.g. command: sleep 600000
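Applied to the service from the question, that would look roughly like this (a sketch using the original names):

ubuntu-os:
  container_name: ubuntu
  image: ubuntu
  stdin_open: true   # equivalent of `docker run -i`
  tty: true          # equivalent of `docker run -t`
  volumes:
    - ubuntu-datavolume:/home/username/docker/.os/ubuntu/

After docker-compose up -d the container should stay up, and you can get a shell in it with docker exec -it ubuntu bash (or docker attach ubuntu).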
Assume I have a Dockerfile like this:
FROM ubuntu:16.04
EXPOSE 5000
CMD ["some", "app"]
And a docker-compose.yml like this:
version: '2.3'
services:
  app:
    build:
      context: .
    expose:
      - 6000
After I do docker-compose up -d, docker ps outputs:
CONTAINER ID   NAMES   STATUS        CREATED        PORTS
3315ec1be1b3   app     Up 41 hours   41 hours ago   5000/tcp, 6000/tcp
Is it possible to unexpose port 5000 and leave only 6000?
The clean way is to create a new image: usually you want your Docker containers and images to be reproducible. If you manually change something so that a container deviates from its image, you rob yourself of that reproducibility (something anyone else managing the infrastructure you're working in would expect).
Right now, directly managing the DNAT with iptables is the way to go. An implementation of this approach is given in this answer.
Indeed, any Dockerfile and container will inherit all the ports that were once configured; they are part of the metadata of an image. There is no UNEXPOSE, but there is a workaround: docker save an image with all its layers, edit the metadata, and docker load the modified image, including its history, back.
As I had that task regularly, I created a little script to do the work; please have a look at docker-copyedit.
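For this question the invocation would look roughly like the line below; this is from memory, so double-check the exact syntax against the docker-copyedit README, and the image names are just placeholders:

./docker-copyedit.py FROM myapp:latest INTO myapp:unexposed REMOVE PORT 5000

A container started from the rewritten myapp:unexposed image would then only report the port added through the compose file's expose: entry.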
Problem
I want to run a webapp via Docker by running two containers as a unit.
One container runs my web server (Tomcat 7).
The other container runs my database (Postgres 9.4).
I can run docker-compose up and Docker is able to spin up my two containers as specified in my docker-compose.yml:
web:
  build: .
  ports:
    - "5000"
  links:
    - db
db:
  image: postgres
I'd like to be able to spin up another copy of my webapp by running docker-compose up again, but this results in Docker telling me that there are already containers running:
$ docker-compose up -d
Creating composetest_db_1
Creating composetest_web_1
$ docker-compose up -d
composetest_db_1 is up-to-date
composetest_web_1 is up-to-date
My workaround
I've gotten around this issue by using the -p option to give new copies different project names:
$ docker-compose -p project1 up -d
...
Successfully built d3268e345f3d
Creating project1_web_1
$ docker-compose -p project2 up -d
...
Successfully built d3268e345f3d
Creating project2_web_1
Unfortunately, this creates new images for each copy:
$ docker images
project1_web latest d3268e345f3d 2 hours ago 682 MB
project2_web latest d3268e345f3d 2 hours ago 682 MB
Question
Is there a way to use docker-compose to spin up multiple instances of a multi-container app by using a single image?
You can re-use your docker compose template by specifying the project name (which defaults to the directory name):
$ docker-compose --project-name inst1 up -d
Creating inst1_web_1
$ docker-compose --project-name inst2 up -d
Creating inst2_web_1
You could also scale up the container instances within a project:
$ docker-compose --project-name inst2 scale web=5
Creating and starting 2 ... done
Creating and starting 3 ... done
Creating and starting 4 ... done
Creating and starting 5 ... done
There should now be 6 containers running:
$ docker ps
CONTAINER ID   IMAGE        COMMAND             CREATED          STATUS          PORTS                     NAMES
5e4ab4cebacf   tomcat:8.0   "catalina.sh run"   43 seconds ago   Up 42 seconds   0.0.0.0:32772->8080/tcp   inst2_web_2
ced61f9ac2db   tomcat:8.0   "catalina.sh run"   43 seconds ago   Up 42 seconds   0.0.0.0:32773->8080/tcp   inst2_web_5
efb1ef13147c   tomcat:8.0   "catalina.sh run"   43 seconds ago   Up 42 seconds   0.0.0.0:32771->8080/tcp   inst2_web_4
58e524da3473   tomcat:8.0   "catalina.sh run"   43 seconds ago   Up 42 seconds   0.0.0.0:32770->8080/tcp   inst2_web_3
0f58c3c3b0ed   tomcat:8.0   "catalina.sh run"   2 minutes ago    Up 2 minutes    0.0.0.0:32769->8080/tcp   inst2_web_1
377e3e5b03e4   tomcat:8.0   "catalina.sh run"   2 minutes ago    Up 2 minutes    0.0.0.0:32768->8080/tcp   inst1_web_1
If you want to reuse the image, you should build the image independently of the compose file.
Run docker build -t somewebapp/web:latest .
Then change the build section of your docker-compose.yml to reference that image.
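For example, based on the compose file from the question (a sketch; somewebapp/web is just the example tag used above):

web:
  image: somewebapp/web:latest
  ports:
    - "5000"
  links:
    - db
db:
  image: postgres

Both project1 and project2 would then run containers from the single somewebapp/web:latest image instead of each building its own project-prefixed copy.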