Docker images are not updated - docker

I want to build an image and run a container. After making some changes to the code, I rebuild the image and run the container using docker-compose up --build.
But in Docker Desktop, the list of images shows "Created about 6 hours ago", even though I rebuilt 2 minutes ago.
I regularly delete the images in Docker Desktop before rebuilding, but the behavior doesn't change.
The only way out is to completely reinstall the Docker Desktop application after every 3-5 rebuilds, which is insane!
What's the problem? Is it some cache?
This is my docker-compose.yml:
version: '3'
services:
  attachment-loader-prim:
    container_name: attachment-loader
    build:
      context: ""
    restart: always
    image: attachment-loader:latest
    environment:
      SPRING_PROFILES_ACTIVE: "prim"
      LOGGING_LEVEL_ORG_HIBERNATE_SQL: DEBUG
      LOGGING_LEVEL_ORG_HIBERNATE_TYPE_DESCRIPTOR_SQL_BASICBINDER: TRACE
    networks:
      - loader-network
    ports:
      - 8005:8005
      - 8085:8085
  attachment-loader-sec:
    container_name: attachment-loader-sec
    build:
      context: ""
    restart: always
    image: attachment-loader:latest
    environment:
      SPRING_PROFILES_ACTIVE: "sec"
      LOGGING_LEVEL_ORG_HIBERNATE_SQL: DEBUG
      LOGGING_LEVEL_ORG_HIBERNATE_TYPE_DESCRIPTOR_SQL_BASICBINDER: TRACE
    networks:
      - loader-network
    ports:
      - 8006:8005
      - 8086:8086
networks:
  loader-network:
    attachable: true
This is my Dockerfile:
FROM adoptopenjdk/openjdk11:alpine-jre
VOLUME /tmp
ARG TZ='Europe/Berlin'
RUN sed -i 's/dl-cdn.alpinelinux.org/uk.alpinelinux.org/' /etc/apk/repositories
RUN apk upgrade --update \
    && apk add -U tzdata curl jq \
    && cp /usr/share/zoneinfo/${TZ} /etc/localtime \
    && apk del tzdata \
    && rm -rf /var/cache/apk/*
RUN echo ${TZ} > /etc/timezone
ARG DEPENDENCY=build/dependency
COPY ${DEPENDENCY}/BOOT-INF/lib /app/lib
COPY ${DEPENDENCY}/META-INF /app/META-INF
COPY ${DEPENDENCY}/BOOT-INF/classes /app
ENTRYPOINT ["java","-agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=*:8005","-cp","app:app/lib/*","com.path.to.your.Application.kt"]

After docker-compose build --no-cache, to make sure it restarts with the most up-to-date image, I use:
docker-compose up -d --force-recreate <service>

To force a build with no cache, you could try the --no-cache option, which rules out any caching during the build. You could also force a clean-up of all the images with something along these lines:
docker-compose down -v --rmi all --remove-orphans
and then try the rebuild again.
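Putting the pieces together, a full clean rebuild could look like this (a sketch built from the commands above):

docker-compose down -v --rmi all --remove-orphans   # remove containers, volumes and images
docker-compose build --no-cache                     # rebuild without using the layer cache
docker-compose up -d --force-recreate               # recreate containers from the fresh image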
Without more details or seeing the actual docker-compose file being used, this is a general attempt to resolve the issue.
EDIT: Based on the example you are showing, the build may not be picking up any changes for the new image, in which case --build just reuses the cached image that was built earlier. The output of the build would be helpful, but if there is no reason for Docker to build a new image, it will skip the build in favor of the cached image that already exists. Try the --no-cache option in the build, and check that all the COPY/RUN steps you expect to trigger a new build actually do.
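To confirm whether a rebuild actually produced a new image, you can compare the image ID and layer timestamps before and after:

docker image ls attachment-loader          # compare IMAGE ID and CREATED across rebuilds
docker history attachment-loader:latest    # lists each layer with its creation time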

Related

Exposing Docker Volumes to Nginx

I'm trying to connect a JSON file, which resides in a Docker volume of the following container, to my main Docker container, which is running a Django project.
Since I am using CapRover, my Docker Compose options are very limited, so Docker Compose is not really an option. I want to instead just expose the JSON file over the web with a link, something like domain.com/folder/jsonfile.json.
Can somebody tell me if this is possible with this Dockerfile?
The image I am using is crucial to the container, so can I just add an nginx image, or do I need other changes to make this work? Or is nginx not even necessary?
FROM ubuntu:devel
ENV TZ=Etc/UTC
ARG APP_HOME=/app
WORKDIR ${APP_HOME}
ENV DEBIAN_FRONTEND=noninteractive
RUN ln -snf /usr/share/zoneinfo/$TZ /etc/localtime
RUN echo $TZ > /etc/timezone
RUN apt-get update && apt-get upgrade -y
RUN apt-get install gnumeric -y
RUN mkdir -p /etc/importer/data
RUN mkdir /voldata
COPY config.toml /etc/importer/
COPY datasets/* /etc/importer/data/
VOLUME /voldata
COPY importer /usr/bin/
RUN chmod +x /usr/bin/importer
COPY . ${APP_HOME}
CMD sleep 999d
Using the same volume in 2 containers
docker-compose:

volumes:
  shared_vol:

services:
  service1:
    volumes:
      - 'shared_vol:/path/to/file'
  service2:
    volumes:
      - 'shared_vol:/path/to/file'
The mechanism above replaces volumes_from since v3, but volumes_from still works in v2:
volumes:
  shared_vol:

services:
  service1:
    volumes:
      - 'shared_vol:/path/to/file'
  service2:
    volumes_from:
      - service1
If you want to avoid unintentional altering, add :ro (read-only) to the consuming service:
service1:
  volumes:
    - 'shared_vol:/path/to/file'
service2:
  volumes:
    - 'shared_vol:/path/to/file:ro'
http-server
Surely you can provide the file via HTTP (or another protocol). There are two options:
Include an HTTP service in your container (quite easy, depending on what is already in the container), e.g. using Node.js you can use https://www.npmjs.com/package/http-server very easily. Size doesn't matter? Then just install:
RUN apt-get install -y nodejs npm
RUN npm install -g http-server
EXPOSE 8080
CMD ["http-server", "--cors", "-p8080", "/path/to/your/json"]
docker-compose (http-server listens on 8080 by default, so open that port):

existing_service:
  ports:
    - '8080:8080'
Run a standalone HTTP server (nginx, Apache httpd, ...) in another container; but then you again depend on the same volume being used by two services, so for a local setup it's quite an overkill.
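If you do go that route, a minimal sketch could look like the following (the volume name, image tag and service names are assumptions, not taken from your setup). nginx serves its default docroot, so whatever the importer writes into the volume becomes reachable as http://host:8080/jsonfile.json:

version: '3'
services:
  importer:
    build: .
    volumes:
      - voldata:/voldata
  web:
    image: nginx:1.25-alpine
    volumes:
      - voldata:/usr/share/nginx/html:ro   # nginx's default docroot
    ports:
      - '8080:80'
volumes:
  voldata: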
Base image
Unless you have good reasons, I would never use something like :devel, :rolling or :latest as a base image. Stick to an LTS version instead, like ubuntu:22.04.
Testing the http-server
Dockerfile
FROM ubuntu:20.04
ENV TZ=Etc/UTC
RUN ln -snf /usr/share/zoneinfo/$TZ /etc/localtime && echo $TZ > /etc/timezone
RUN apt-get update
RUN apt-get install -y nodejs npm
# Pin v13: there is an issue with JSON files in v14: https://github.com/http-party/http-server/issues/634
RUN npm install -g http-server@13.1.0
COPY ./test.json /usr/wwwhttp/test.json
EXPOSE 8080
CMD ["http-server", "--cors", "-p8080", "/usr/wwwhttp/"]

# docker build -t test/httpserver:latest .
# docker run -p 8080:8080 test/httpserver:latest
Disclaimer:
I am not that familiar with Node Docker images; this is just to give a quick working solution to build on. I'm not using Node.js in production, but I'm sure the image can be optimized from being fat to.. well.. being rather fat. For quick prototyping, size doesn't matter.
If you just want two containers to access the same file, use a volume with --mount.
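For example (a sketch; the writer image name is a placeholder for your own image):

docker volume create shared_vol
docker run -d --name writer --mount source=shared_vol,target=/voldata my-importer-image
docker run -d --name reader --mount source=shared_vol,target=/voldata,readonly nginx:alpine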

Why isn't docker-compose fully executing the contents of a container that another container is dependent upon?

I want to start a Python container that depends on a database container, but I would like the Python container to start only after the SQL Server container has fully executed. I built this docker-compose.yml file ...
version: "3.2"
services:
sql-server-db:
restart: always
build: ./
container_name: sql-server-db
image: microsoft/mssql-server-linux:2017-latest
env_file: /Users/davea/my_project/api/tests/.test_env
ports:
- 3900:1433
environment:
- ACCEPT_EULA=Y
- SA_PASSWORD=password
- DB_HOST=0.0.0.0
- DB_NAME=my_db
- DB_USER=SA
- DB_PASS=password
volumes:
- ../../CloudDB/CloudDB:/sqlscripts
python:
restart: always
build: ../
environment:
DEBUG: 'true'
volumes:
- /Users/davea/my_project/api:/my-app
depends_on:
- sql-server-db
Below is my Dockerfile for the SQL Server container ...
FROM microsoft/mssql-server-linux:latest
RUN apt-get update
RUN apt-get install unzip -y
RUN apt-get install tzdata
ENV TZ=America/New_York
RUN ln -fs /usr/share/zoneinfo/$TZ /etc/localtime && dpkg-reconfigure -f noninteractive tzdata
RUN date
RUN echo "========="
# Install sqlpackage, needed for deploying the dacpac file
RUN wget --progress=bar:force -q -O sqlpackage.zip https://go.microsoft.com/fwlink/?linkid=873926 \
    && unzip -qq sqlpackage.zip -d /opt/sqlpackage \
    && chmod +x /opt/sqlpackage/sqlpackage
# Create work directory
RUN mkdir -p /usr/work
WORKDIR /usr/work
# Copy all SQL scripts into working directory
COPY . /usr/work/
# Grant permissions for the import-data script to be executable
RUN chmod +x /usr/work/import-data.sh
RUN pwd
CMD /bin/bash ./entrypoint.sh
but I'm noticing something strange: the SQL Server container does not seem to fully execute the commands in the entrypoint.sh file. I see this output ...
...
Removing intermediate container 72550d896ede
---> ae6b93ca884e
Step 14/15 : RUN pwd
---> Running in f229ef6fec4c
/usr/work
Removing intermediate container f229ef6fec4c
---> 7758242bbd95
Step 15/15 : CMD /bin/bash ./entrypoint.sh
---> Running in 76fa5c8308e3
Removing intermediate container 76fa5c8308e3
---> 567633ad757f
Successfully built 567633ad757f
Successfully tagged microsoft/mssql-server-linux:2017-latest
WARNING: Image for service sql-server-db was built because it did not already exist. To rebuild this image you must use `docker-compose build` or `docker-compose up --build`.
Building python
Step 1/17 : FROM python:3.8-slim
Below are the contents of the entrypoint.sh file. Is there another way I can structure things so that the commands are executed? I'm also noticing the Python container doesn't seem to recognize the SQL Server container.
#!/bin/bash -l
/usr/work/import-data.sh & /opt/mssql/bin/sqlservr
Is there something else I need to do to get the shell script in my SQL Server container to fully execute?
Your usage of the depends_on option is incorrect, or perhaps not working the way you intend it to.
See the documentation of depends_on. It clearly states that it does not wait for a database to be ready; depends_on only waits until the service has been started:
depends_on does not wait for db and redis to be “ready” before starting web - only until they have been started. If you need to wait for a service to be ready, see Controlling startup order for more on this problem and strategies for solving it.
You would benefit from creating some sort of manual "wait-for-it" script (as seen in this docker-compose example) and running it before starting the Python container.
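A minimal sketch of such a script (the file name is a placeholder; it uses bash's /dev/tcp feature to poll the database port, 1433 being SQL Server's internal port):

#!/bin/bash
# wait-for-sql.sh: block until sql-server-db accepts TCP connections, then run the given command
until (echo > /dev/tcp/sql-server-db/1433) 2>/dev/null; do
  echo "sql-server-db is unavailable - sleeping"
  sleep 2
done
exec "$@"

The Python service would then wrap its startup command, e.g. command: ["./wait-for-sql.sh", "python", "app.py"] (app.py being whatever your actual entry point is). Note that an open port still doesn't guarantee your import scripts have finished; for that you would need an application-level check.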

How can I create a Dockerfile from a docker-compose file

I have an existing docker-compose file:
version: '3.6'
services:
  verdaccio:
    restart: always
    image: verdaccio/verdaccio
    container_name: verdaccio
    ports:
      - 4873:4873
    volumes:
      - conf:/verdaccio/conf
      - storage:/verdaccio/storage
      - plugins:/verdaccio/plugins
    environment:
      - VERDACCIO_PROTOCOL=https
networks:
  default:
    external:
      name: registry
I would like to use a Dockerfile instead of docker-compose, as it will be easier to deploy a Dockerfile to an Azure container registry.
I have tried many solutions posted on blogs and elsewhere, but nothing worked the way I needed.
How can I create a simple Dockerfile from the above docker-compose file?
You can't. Many of the Docker Compose options (and the equivalent docker run options) can only be set when you start a container. In your example, the restart policy, published ports, mounted volumes, network configuration, and overriding the container name are all runtime-only options.
If you built a Docker image matching this, the most you could add is setting that one ENV variable, and COPYing the configuration files and plugins into the image rather than storing them in named volumes. The majority of that docker-compose.yml would still be required.
If you want to put the conf, storage and plugins files/folders into the image, you can just copy them:
FROM verdaccio/verdaccio
WORKDIR /verdaccio
COPY conf conf
COPY storage storage
COPY plugins plugins
but if you need to keep file and folder changes, then you should leave them as volumes, as they are now.
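Once such an image builds, pushing it to an Azure container registry could look like this (the registry and repository names are placeholders):

az acr login --name myregistry
docker build -t myregistry.azurecr.io/verdaccio:1.0 .
docker push myregistry.azurecr.io/verdaccio:1.0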
docker-compose uses an existing image.
If what you want is to create a custom image and use it with your docker-compose setup, this is perfectly possible:
1. Create your Dockerfile - example here: https://docs.docker.com/get-started/part2/
2. Build an image from your Dockerfile: docker build -f /path/to/Dockerfile -t saurabh_rai/myapp:1.0 . - this returns an image ID, something like 12abef12
3. Log in to your Docker Hub account (saurabh_rai) and create a repo for the image to be pushed to (myapp)
4. docker push saurabh_rai/myapp:1.0 - pushes your image to the myapp repo on hub.docker.com. You may need to run docker login first and enter your username/password as usual at the command line.
5. Update your docker-compose.yaml file to use your image saurabh_rai/myapp:1.0
example docker-compose.yaml (note the key is networks, not network, and the external registry network still has to be declared at the top level):

version: '3.6'
services:
  verdaccio:
    restart: always
    container_name: verdaccio
    image: saurabh_rai/myapp:1.0
    ports:
      - 4873:4873
    volumes:
      - conf:/verdaccio/conf
      - storage:/verdaccio/storage
      - plugins:/verdaccio/plugins
    environment:
      - VERDACCIO_PROTOCOL=https
    networks:
      - registry
networks:
  registry:
    external: true
I have solved this issue by using the existing Verdaccio Dockerfile, given below.
FROM node:12.16.2-alpine as builder

ENV NODE_ENV=production \
    VERDACCIO_BUILD_REGISTRY=https://registry.verdaccio.org

RUN apk --no-cache add openssl ca-certificates wget && \
    apk --no-cache add g++ gcc libgcc libstdc++ linux-headers make python && \
    wget -q -O /etc/apk/keys/sgerrand.rsa.pub https://alpine-pkgs.sgerrand.com/sgerrand.rsa.pub && \
    wget -q https://github.com/sgerrand/alpine-pkg-glibc/releases/download/2.25-r0/glibc-2.25-r0.apk && \
    apk add glibc-2.25-r0.apk

WORKDIR /opt/verdaccio-build
COPY . .

RUN yarn config set registry $VERDACCIO_BUILD_REGISTRY && \
    yarn install --production=false && \
    yarn lint && \
    yarn code:docker-build && \
    yarn cache clean && \
    yarn install --production=true

FROM node:12.16.2-alpine
LABEL maintainer="https://github.com/verdaccio/verdaccio"

ENV VERDACCIO_APPDIR=/opt/verdaccio \
    VERDACCIO_USER_NAME=verdaccio \
    VERDACCIO_USER_UID=10001 \
    VERDACCIO_PORT=4873 \
    VERDACCIO_PROTOCOL=http
ENV PATH=$VERDACCIO_APPDIR/docker-bin:$PATH \
    HOME=$VERDACCIO_APPDIR

WORKDIR $VERDACCIO_APPDIR

RUN apk --no-cache add openssl dumb-init
RUN mkdir -p /verdaccio/storage /verdaccio/plugins /verdaccio/conf

COPY --from=builder /opt/verdaccio-build .
ADD conf/docker.yaml /verdaccio/conf/config.yaml

RUN adduser -u $VERDACCIO_USER_UID -S -D -h $VERDACCIO_APPDIR -g "$VERDACCIO_USER_NAME user" -s /sbin/nologin $VERDACCIO_USER_NAME && \
    chmod -R +x $VERDACCIO_APPDIR/bin $VERDACCIO_APPDIR/docker-bin && \
    chown -R $VERDACCIO_USER_UID:root /verdaccio/storage && \
    chmod -R g=u /verdaccio/storage /etc/passwd

USER $VERDACCIO_USER_UID
EXPOSE $VERDACCIO_PORT
VOLUME /verdaccio/storage
ENTRYPOINT ["uid_entrypoint"]
CMD $VERDACCIO_APPDIR/bin/verdaccio --config /verdaccio/conf/config.yaml --listen $VERDACCIO_PROTOCOL://0.0.0.0:$VERDACCIO_PORT
By making a few changes to this Dockerfile I was able to build and push my Docker image to an Azure container registry and deploy it to an app service.
@Giga Kokaia, @Rob Evans, @Aman - thank you for the suggestions, they made this much easier to think through.

How to use docker-compose to build an application on a CI?

I would like to configure a CI such as Travis CI to build my application with Docker. My application has two parts: JavaScript and Python.
I thought to use docker-compose for this:
version: '3'
services:
  node:
    image: node:12.8.0-buster
    volumes:
      - .:/srv
  python:
    image: python:3.7.4-buster
    volumes:
      - .:/src
I would like to have a Makefile such as (recipe lines indented with tabs):

all: foo bar

foo:
	docker-compose exec node /bin/bash -c ' \
		cd /workdir; \
		npm install; \
		npm run build'

bar:
	docker-compose exec python /bin/bash -c ' \
		cd /workdir; \
		pip install sphinx; \
		make html'
Is it correct to use docker-compose like this? And what should I change to make it work?
docker-compose supports not only running containers but also building images - see the build section of the Compose file reference.
So, for your scenario, you should move your package build into a Dockerfile and execute it with docker-compose up -d --build, which will first build a Docker image and then start the service based on that new image.
A simple sketch is below; note that it only explains the main idea and is not a fully workable example - you need to fill in your own details based on your real situation.
Dockerfile.node:
FROM node:12.8.0-buster
# Add related to build
ADD . /srv
# Add all package install
RUN cd /workdir && npm install && npm run build
# Others
......
Dockerfile.python:
FROM python:3.7.4-buster
# Add related to build
ADD . /srv
# Add all package install
RUN cd /workdir && pip install sphinx && make html
# Others
......
docker-compose.yaml:
version: '3'
services:
  node:
    build:
      context: .
      dockerfile: Dockerfile.node
    volumes:
      - .:/srv
  python:
    build:
      context: .
      dockerfile: Dockerfile.python
    volumes:
      - .:/src
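With this in place, the CI step reduces to a single command:

docker-compose up -d --build   # builds both Dockerfiles, then starts the services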

Docker volume does not fully sync a directory for one container

I've created a simple project for Symfony 4 based on PHP 7.3 + MariaDB via docker-compose. I'm using Docker for Windows 10 (x64).
It works correctly on one machine, but on my laptop it doesn't sync correctly with the container.
In the root folder I have the standard Symfony structure with Docker files like:
- /config
- /public
- /src
....
- /env
- /docker
- .env
- docker-compose.yaml
...
My actions in Git Bash to start the app:
docker-compose build
works correctly, all actions finish successfully
docker-compose up -d
works correctly, both containers run successfully
docker-compose exec app bash
works correctly, the console starts
ls
the result is: docker env
Only 2 directories are synced - docker and env - and the docker dir is not synced fully: only the subdirectory structure, without files.
I tried to find the reason for the file-sync problem, but I don't have enough knowledge and experience with Docker; docker-compose logs show no errors.
Maybe somebody can help me detect the reason? It works once, but after a reboot the problem occurs again...
docker-compose.yaml:
version: '3'
services:
  app:
    restart: unless-stopped
    build:
      context: .
      dockerfile: docker/webserver-apache/Dockerfile
    image: php:7.3.1-apache-stretch
    volumes:
      - "./docker/webserver-apache/sites-enabled:/etc/apache2/sites-enabled:ro"
      - "./:/var/www/html"
    ports:
      - 8080:80
    networks:
      - dphptrainnet
  mariadb:
    restart: unless-stopped
    image: mariadb:10.4.1
    networks:
      - dphptrainnet
    volumes:
      - ./env/mariadb/data:/var/lib/mysql
    ports:
      - 3306:3306
    environment:
      MYSQL_ROOT_PASSWORD: ${MYSQL_PASSWORD}
networks:
  dphptrainnet:
Dockerfile:
FROM php:7.3.1-apache-stretch
# Setting up constants for an environment
ENV PHP_MEMORY_LIMIT 512M
RUN php -r "copy('https://getcomposer.org/installer', 'composer-setup.php');" && \
    php composer-setup.php && \
    php -r "unlink('composer-setup.php');" && \
    mv composer.phar /usr/local/bin/composer
RUN apt-get update && \
    apt-get install -y curl vim git zip unzip
# Setting up httpd issues
RUN echo "ServerName localhost" >> /etc/apache2/apache2.conf
RUN a2enmod rewrite headers && /etc/init.d/apache2 restart
RUN echo "127.0.0.1 dockertrain.local" >> /etc/hosts
WORKDIR "/var/www/html"
RUN a2enmod rewrite
I've found only one working solution - re-share the drive for Docker:
1. Disable the shared disk, click Apply
2. Enable the shared disk, click Apply
3. Restart the application - the files are synced
But how should I detect whether there are any problems with drive access? No errors, no logs...
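One quick way to check whether drive sharing is working at all (a generic diagnostic, not specific to this project) is to mount the current directory into a throwaway container and list it - if sharing is broken, the listing comes back empty:

docker run --rm -v "$(pwd):/host" alpine:3 ls -la /host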
