I have an existing docker-compose file:
version: '3.6'

services:
  verdaccio:
    restart: always
    image: verdaccio/verdaccio
    container_name: verdaccio
    ports:
      - 4873:4873
    volumes:
      - conf:/verdaccio/conf
      - storage:/verdaccio/storage
      - plugins:/verdaccio/plugins
    environment:
      - VERDACCIO_PROTOCOL=https

networks:
  default:
    external:
      name: registry
I would like to use a Dockerfile instead of docker-compose, as it will be easier to deploy a Dockerfile to an Azure container registry.
I have tried many solutions posted on blogs and elsewhere, but nothing worked as I needed.
How can I create a simple Dockerfile from the above docker-compose file?
You can't. Many of the Docker Compose options (and the equivalent docker run options) can only be set when you start a container. In your example, the restart policy, published ports, mounted volumes, network configuration, and overriding the container name are all runtime-only options.
If you built a Docker image matching this, the most you could add in is setting that one ENV variable, and COPYing in the configuration files and plugins rather than storing them in named volumes. The majority of that docker-compose.yml would still be required.
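To make that concrete, here is a minimal sketch of how far an image alone can take you; the image tag my-verdaccio is made up for illustration:
FROM verdaccio/verdaccio
# the protocol setting is the only part of the compose file an image can bake in
ENV VERDACCIO_PROTOCOL=https
Everything else has to stay on the docker run side:
docker run -d --restart always --name verdaccio \
    --network registry -p 4873:4873 \
    -v conf:/verdaccio/conf \
    -v storage:/verdaccio/storage \
    -v plugins:/verdaccio/plugins \
    my-verdaccio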
If you want to put the conf, storage and plugins files/folders into the image, you can just copy them:
FROM verdaccio/verdaccio
WORKDIR /verdaccio
COPY conf conf
COPY storage storage
COPY plugins plugins
but if you need to keep file and folder changes, then you should keep them as named volumes, as they are now.
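For completeness, a sketch of building and running that image, assuming the conf, storage and plugins directories sit next to the Dockerfile (the my-verdaccio tag is made up):
docker build -t my-verdaccio .
docker run -d --restart always --name verdaccio -p 4873:4873 my-verdaccio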
docker-compose uses an existing image.
If what you want is to create a custom image and use it with your docker-compose setup, this is perfectly possible.
create your Dockerfile - example here: https://docs.docker.com/get-started/part2/
build an "image" from your Dockerfile: docker build -f /path/to/Dockerfile -t saurabh_rai/myapp:1.0 this returns an image ID something like 12abef12
login to your dockerhub account (saurabh_rai) and create a repo for the image to be pushed to (myapp)
docker push saurabh_rai/myapp:1.0 - will push your image to hub.docker.com repo for your user to the myapp repo. You may need to perform docker login for this to work and enter your username/password as usual at the command line.
Update your docker-compose.yaml file to use your image saurabh_rai/myapp:1.0
example docker-compose.yaml:
version: '3.6'

services:
  verdaccio:
    restart: always
    container_name: verdaccio
    image: saurabh_rai/myapp:1.0
    ports:
      - 4873:4873
    volumes:
      - conf:/verdaccio/conf
      - storage:/verdaccio/storage
      - plugins:/verdaccio/plugins
    environment:
      - VERDACCIO_PROTOCOL=https
    networks:
      - registry

networks:
  registry:
    external: true
I solved this issue by using the existing Verdaccio Dockerfile, given below.
FROM node:12.16.2-alpine as builder

ENV NODE_ENV=production \
    VERDACCIO_BUILD_REGISTRY=https://registry.verdaccio.org

RUN apk --no-cache add openssl ca-certificates wget && \
    apk --no-cache add g++ gcc libgcc libstdc++ linux-headers make python && \
    wget -q -O /etc/apk/keys/sgerrand.rsa.pub https://alpine-pkgs.sgerrand.com/sgerrand.rsa.pub && \
    wget -q https://github.com/sgerrand/alpine-pkg-glibc/releases/download/2.25-r0/glibc-2.25-r0.apk && \
    apk add glibc-2.25-r0.apk

WORKDIR /opt/verdaccio-build
COPY . .

RUN yarn config set registry $VERDACCIO_BUILD_REGISTRY && \
    yarn install --production=false && \
    yarn lint && \
    yarn code:docker-build && \
    yarn cache clean && \
    yarn install --production=true

FROM node:12.16.2-alpine
LABEL maintainer="https://github.com/verdaccio/verdaccio"

ENV VERDACCIO_APPDIR=/opt/verdaccio \
    VERDACCIO_USER_NAME=verdaccio \
    VERDACCIO_USER_UID=10001 \
    VERDACCIO_PORT=4873 \
    VERDACCIO_PROTOCOL=http

ENV PATH=$VERDACCIO_APPDIR/docker-bin:$PATH \
    HOME=$VERDACCIO_APPDIR

WORKDIR $VERDACCIO_APPDIR

RUN apk --no-cache add openssl dumb-init
RUN mkdir -p /verdaccio/storage /verdaccio/plugins /verdaccio/conf

COPY --from=builder /opt/verdaccio-build .
ADD conf/docker.yaml /verdaccio/conf/config.yaml

RUN adduser -u $VERDACCIO_USER_UID -S -D -h $VERDACCIO_APPDIR -g "$VERDACCIO_USER_NAME user" -s /sbin/nologin $VERDACCIO_USER_NAME && \
    chmod -R +x $VERDACCIO_APPDIR/bin $VERDACCIO_APPDIR/docker-bin && \
    chown -R $VERDACCIO_USER_UID:root /verdaccio/storage && \
    chmod -R g=u /verdaccio/storage /etc/passwd

USER $VERDACCIO_USER_UID

EXPOSE $VERDACCIO_PORT

VOLUME /verdaccio/storage

ENTRYPOINT ["uid_entrypoint"]

CMD $VERDACCIO_APPDIR/bin/verdaccio --config /verdaccio/conf/config.yaml --listen $VERDACCIO_PROTOCOL://0.0.0.0:$VERDACCIO_PORT
By making a few changes to this Dockerfile I was able to build and push my Docker image to Azure Container Registry and deploy it to an App Service.
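As a rough sketch of that flow (myregistry is a hypothetical ACR name; adjust to your own instance):
az acr login --name myregistry
docker build -t myregistry.azurecr.io/verdaccio:1.0 .
docker push myregistry.azurecr.io/verdaccio:1.0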
@Giga Kokaia, @Rob Evans, @Aman - thank you for the suggestions; they made this much easier to think through.
Related
I am deploying ClamAV on AWS, whose Dockerfile is:
FROM alpine:3.14
LABEL maintainer="Markus Kosmal <code@m-ko.de>"
RUN apk add --no-cache bash clamav clamav-daemon clamav-libunrar
COPY conf /etc/clamav
COPY bootstrap.sh /
COPY envconfig.sh /
COPY check.sh /
RUN mkdir /var/run/clamav && \
    chown clamav:clamav /var/run/clamav && \
    chmod 750 /var/run/clamav && \
    chown -R clamav:clamav bootstrap.sh check.sh /etc/clamav && \
    chmod u+x bootstrap.sh check.sh
EXPOSE 3310/tcp
USER clamav
CMD ["/bootstrap.sh"]
and since I am using a mirror, I am testing locally with this docker-compose file:
version: "3.7"
services:
mirror:
build:
context: .
dockerfile: mirror/Dockerfile
ports:
- "8080:8080"
clamav:
build:
context: ../clamav
environment:
CLAMAVDATABASEMIRROR: "http://0.0.0.0:8080"
depends_on:
- mirror
ports:
- "3310:3310"
The services work fine, and when I run docker-compose up --build I can see from the logs that the services are pulling the daily database update and so on.
If I run docker container ls
I get that clamav has ports: 3310/tcp, whereas the mirror has a port mapped to my local host:
0.0.0.0:8080->8080/tcp
and I can run curl localhost:8080.
But if I try to curl localhost on 3310 I get
curl: (52) Empty reply from server
Now: how do I perform a health check on the clamav service?
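One possible approach: clamd answers PING with PONG on its TCP port, so a compose-level health check can probe that. A minimal sketch, assuming nc is available in the container (it is part of BusyBox on Alpine) and with arbitrary timing values:
clamav:
  build:
    context: ../clamav
  healthcheck:
    test: ["CMD-SHELL", "echo PING | nc -w 3 localhost 3310 | grep -q PONG"]
    interval: 30s
    timeout: 5s
    retries: 3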
Good afternoon,
I'm a new user of Docker and I have a question, please:
I created a container named "container_worker" which runs a Python script to create some data.
The created data is stored in the container in a file named "data".
I want to copy this "data" file to my host to be able to use it for another purpose.
I saw it's possible to do this with the docker cp command, but I want to do it directly in my Dockerfile or my docker-compose file.
Here are my files:
Dockerfile:
FROM archlinux

RUN pacman-db-upgrade \
    && pacman -Syyu --noconfirm \
    && pacman -S python --noconfirm \
    && pacman -S python-pip --noconfirm \
    && pip install requests

COPY /worker/script.py .

CMD python3 script.py >> data
docker-compose.yml
version: "3"
services:
worker:
image: image_worker
container_name: container_worker
build:
dockerfile: ./worker/Dockerfile
Thank you very much.
You can bind-mount a volume from your local machine into the container and write the output to that shared location.
Use the command docker volume create my-vol to create a volume.
Use the command docker volume ls to list your volumes.
Use the volumes: key in your docker-compose file to mount the created volume; see the sketch below.
Refer to https://docs.docker.com/storage/volumes/ for more.
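For the use case above (getting the "data" file onto the host), a bind mount is the most direct route. A minimal sketch, assuming the script is changed to write its output into /data inside the container (the ./output host path is hypothetical):
version: "3"
services:
  worker:
    image: image_worker
    container_name: container_worker
    build:
      context: .
      dockerfile: ./worker/Dockerfile
    volumes:
      - ./output:/data   # files the script writes to /data appear in ./output on the host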
I'm trying to connect a JSON file, which resides in a Docker volume of the following container, to my main Docker container, which is running a Django project.
Since I am using CapRover, my Docker Compose options are very limited.
So Docker Compose is not really an option; I want to instead just expose the JSON file over the web with a link.
Something like domain.com/folder/jsonfile.json
Can somebody tell me if this is possible inside this Dockerfile?
The image I am using is crucial to the container, so can I just add an nginx image, or do I need any other changes to make this work?
Or is nginx not even necessary?
FROM ubuntu:devel
ENV TZ=Etc/UTC
ARG APP_HOME=/app
WORKDIR ${APP_HOME}
ENV DEBIAN_FRONTEND=noninteractive
RUN ln -snf /usr/share/zoneinfo/$TZ /etc/localtime
RUN echo $TZ > /etc/timezone
RUN apt-get update && apt-get upgrade -y
RUN apt-get install gnumeric -y
RUN mkdir -p /etc/importer/data
RUN mkdir /voldata
COPY config.toml /etc/importer/
COPY datasets/* /etc/importer/data/
VOLUME /voldata
COPY importer /usr/bin/
RUN chmod +x /usr/bin/importer
COPY . ${APP_HOME}
CMD sleep 999d
Using the same volume in 2 containers
docker-compose:
volumes:
  shared_vol:

services:
  service1:
    volumes:
      - 'shared_vol:/path/to/file'
  service2:
    volumes:
      - 'shared_vol:/path/to/file'
The named-volume mechanism above replaces volumes_from, which was removed in v3 of the compose file format; it works for v2 as well, where you could also still use volumes_from:
volumes:
  shared_vol:

services:
  service1:
    volumes:
      - 'shared_vol:/path/to/file'
  service2:
    volumes_from:
      - service1
If you want to avoid unintentional changes, add :ro (read-only) to the target service:
services:
  service1:
    volumes:
      - 'shared_vol:/path/to/file'
  service2:
    volumes:
      - 'shared_vol:/path/to/file:ro'
http-server
Surely you can provide the file via HTTP (or another protocol). There are two options:
Include an HTTP service in your container (quite easy, depending on what is already in the container), e.g. using Node.js you can use https://www.npmjs.com/package/http-server very easily. Size doesn't matter? Then just install:
RUN apt-get install -y nodejs npm
RUN npm install -g http-server
EXPOSE 8080
CMD ["http-server", "--cors", "-p8080", "/path/to/your/json"]
docker-compose (it runs on 8080 per default, so open that port):
existing_service:
  ports:
    - '8080:8080'
Run a standalone HTTP server (nginx, Apache httpd, ...) in another container; but then you again depend on using the same volume for two services, so for local solutions that is quite overkill.
Base image
If you don't have good reasons, I would never use something like :devel, :rolling or :latest as a base image. Stick to an LTS version instead, like ubuntu:22.04.
Testing for http-server
Dockerfile
FROM ubuntu:20.04
ENV TZ=Etc/UTC
RUN ln -snf /usr/share/zoneinfo/$TZ /etc/localtime && echo $TZ > /etc/timezone
RUN apt-get update
RUN apt-get install -y nodejs npm
RUN npm install -g http-server@13.1.0 # Issue with JSON-File in V14: https://github.com/http-party/http-server/issues/634
COPY ./test.json ./usr/wwwhttp/test.json
EXPOSE 8080
CMD ["http-server", "--cors", "-p8080", "/usr/wwwhttp/"]
# docker build -t test/httpserver:latest .
# docker run -p 8080:8080 test/httpserver:latest
Disclaimer:
I am not that familiar with Node Docker images; this is just to give a quick working solution to go on from. I'm not using Node.js in production, but I'm sure it can be optimized from being fat to.. well.. being rather fat. But for quick prototyping, size doesn't matter.
If you just want two containers to access the same file, use a volume with --mount, as sketched below.
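A minimal sketch of the --mount route (volume and container names are made up for illustration):
# create a named volume and share it between a writer and a reader
docker volume create shared_vol
docker run -d --name writer --mount source=shared_vol,target=/data alpine \
    sh -c 'echo "{}" > /data/test.json && sleep infinity'
docker run --rm --mount source=shared_vol,target=/data,readonly alpine \
    cat /data/test.json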
Is there a way to avoid rebuilding my Docker image each time I make a change in my source code?
I think I have already optimized my Dockerfile enough to decrease build time, but it's always 2 commands and some waiting time, sometimes for just one added line of code. It's longer than a simple CTRL + S and checking the results.
The commands I have to do for each little update in my code:
docker-compose down
docker-compose build
docker-compose up
Here's my Dockerfile:
FROM python:3-slim as development

ENV PYTHONUNBUFFERED=1

COPY ./requirements.txt /requirements.txt
COPY ./scripts /scripts

EXPOSE 80

RUN apt-get update && \
    apt-get install -y \
        bash \
        build-essential \
        gcc \
        libffi-dev \
        musl-dev \
        openssl \
        wget \
        postgresql \
        postgresql-client \
        libglib2.0-0 \
        libnss3 \
        libgconf-2-4 \
        libfontconfig1 \
        libpq-dev && \
    pip install -r /requirements.txt && \
    mkdir -p /vol/web/static && \
    chmod -R 755 /vol && \
    chmod -R +x /scripts

COPY ./files /files

WORKDIR /files

ENV PATH="/scripts:/py/bin:$PATH"

CMD ["run.sh"]
Here's my docker-compose.yml file:
version: '3.9'

x-database-variables: &database-variables
  POSTGRES_DB: ${POSTGRES_DB}
  POSTGRES_USER: ${POSTGRES_USER}
  POSTGRES_PASSWORD: ${POSTGRES_PASSWORD}
  ALLOWED_HOSTS: ${ALLOWED_HOSTS}

x-app-variables: &app-variables
  <<: *database-variables
  POSTGRES_HOST: ${POSTGRES_HOST}
  SPOTIPY_CLIENT_ID: ${SPOTIPY_CLIENT_ID}
  SPOTIPY_CLIENT_SECRET: ${SPOTIPY_CLIENT_SECRET}
  SECRET_KEY: ${SECRET_KEY}
  CLUSTER_HOST: ${CLUSTER_HOST}
  DEBUG: 0

services:
  website:
    build:
      context: .
    restart: always
    volumes:
      - static-data:/vol/web
    environment: *app-variables
    depends_on:
      - postgres

  postgres:
    image: postgres
    restart: always
    environment: *database-variables
    volumes:
      - db-data:/var/lib/postgresql/data

  proxy:
    build:
      context: ./proxy
    restart: always
    depends_on:
      - website
    ports:
      - 80:80
      - 443:443
    volumes:
      - static-data:/vol/static
      - ./files/templates:/var/www/html
      - ./proxy/default.conf:/etc/nginx/conf.d/default.conf
      - ./etc/letsencrypt:/etc/letsencrypt

volumes:
  static-data:
  db-data:
Mount your script files directly in the container via docker-compose.yml:
volumes:
  - ./scripts:/scripts
  - ./files:/files
Keep in mind that you have to prefix the container-side paths to match any WORKDIR set in your Dockerfile.
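For the compose file above, the mounts would sit on the website service; a sketch (only the two bind mounts are new):
  website:
    build:
      context: .
    restart: always
    volumes:
      - static-data:/vol/web
      - ./scripts:/scripts   # live-mounted: host edits appear in the container without a rebuild
      - ./files:/files
    environment: *app-variables
    depends_on:
      - postgres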
Quick answer
Is there a way to avoid rebuilding my Docker image each time I make a change in my source code?
If your app needs a build step, you cannot skip it.
In your case, you can install the requirements before the Python app, so on each source code modification you just need to run your Python app again, not the entire stack: postgres, proxy, etc.
Docker purpose
Docker's main goal or feature is to enable developers to package applications into containers which are easy to deploy anywhere, simplifying your infrastructure.
So, in this sense, Docker is not strictly for the development stage. In the development stage, the programmer should use a specialized IDE (Eclipse, IntelliJ, Visual Studio, etc.) to create and update the source code. Also, some languages like Java and C#, and frameworks like React/Angular, need a build stage.
These IDEs have features like hot reload (automatic application updates when the source code changes), variable and method auto-completion, etc. These features help to reduce development time.
Docker for source code changes by developer
This is not Docker's main goal, but if you don't have a specialized IDE, or you are in a very limited developer workspace (no admin permission, network restrictions, Windows, ports, etc.), Docker can rescue you.
If you are a Java developer (for instance), you need to install Java on your machine, some IDE like Eclipse, configure Maven, etc. With Docker, you could create an image with all the required technologies and then establish a kind of connection between your source code and the Docker container. This connection in Docker is called volumes:
docker run --name my_job -p 9000:8080 \
-v /my/python/microservice:/src \
python-workspace-all-in-one
In the previous example, you could code directly in /my/python/microservice, and you only need to enter the my_job container and run python /src/main.py. It will work without Python or any other requirement on your host machine; everything lives in python-workspace-all-in-one.
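For instance, entering the running container and launching the script could look like this (names taken from the example above):
docker exec -it my_job python /src/main.py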
In the case of technologies that need a build process, like Java and C#, there is a time penalty, because the developer has to perform a build on every source code change. This is not required when using a specialized IDE, as explained.
In the case of technologies that don't require a build process, like PHP (just the installation of libraries/dependencies), Docker will work almost the same as the specialized IDE.
Docker for local development with hot-reload
In your case, your app is based on Python. Python doesn't require a build process, just the installation of libraries, so if you want to develop with Python using Docker instead of the classic way (install Python, execute python app.py, etc.) you should follow these steps:
Don't copy your source code to the container
Just pass the requirements.txt to the container
Execute the pip install inside of the container
Run your app inside of the container
Create a docker volume: your source code -> internal folder in the container
Here is an example of a Python framework (mkdocs) with hot reload:
FROM python:3
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
COPY requirements.txt /usr/src/app
RUN pip install -r requirements.txt
CMD [ "mkdocs", "serve", "--dev-addr=0.0.0.0:8000" ]
and how to build the dev version:
docker build -t myapp-dev .
and how to run it with volumes, to sync your changes with the container:
docker run --name myapp-dev -it --rm -p 8000:8000 -v $(pwd):/usr/src/app myapp-dev
As a summary, this would be the flow to run your apps with Docker in the development stage:
start the requirements before the app (database, APIs, etc.)
create a special Dockerfile for the development stage
build the Docker image for development purposes
run the app, syncing the source code with the container (-v)
the developer modifies the source code
if you can, use some kind of hot-reload library with Python (see the sketch after this list)
the app is ready to be opened from a browser
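One hedged option for that hot-reload step is the watchdog package, which ships a watchmedo helper that restarts a process when source files change. A sketch, assuming watchdog is installed in the image (pip install "watchdog[watchmedo]"; app.py is a placeholder entry point):
# restart app.py whenever a .py file in the mounted source tree changes
watchmedo auto-restart --patterns="*.py" --recursive -- python app.py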
Docker for local development without hot-reload
If you cannot use a hot-reload library, you will need to build and run whenever you want to test your source code modifications. In this case, you should copy the source code into the container instead of synchronizing it with volumes as in the previous approach:
FROM python:3
RUN mkdir -p /usr/src/app
COPY . /usr/src/app
WORKDIR /usr/src/app
RUN pip install -r requirements.txt
RUN mkdocs build
WORKDIR /usr/src/app/site
CMD ["python", "-m", "http.server", "8000" ]
Steps should be:
start the requirements before the app (database, APIs, etc.)
create a special Dockerfile for the development stage
the developer modifies the source code
build
docker build -t myapp-dev .
run
docker run --name myapp-dev -it --rm -p 8000:8000 myapp-dev
I want to build an image and run a container; after some changes in the code I rebuild the image and run the container using docker-compose up --build.
But in Docker Desktop, in the list of images, I see "Created about 6 hours ago", even though I rebuilt 2 minutes ago.
I regularly delete images in Docker Desktop before rebuilding, but the behavior does not change.
The only way out is to completely reinstall the Docker Desktop application after 3-5 rebuilt images, but that's insane!
What's the problem? Is it some cache?
This is my docker-compose file:
version: '3'

services:
  attachment-loader-prim:
    container_name: attachment-loader
    build:
      context: ""
    restart: always
    image: attachment-loader:latest
    environment:
      SPRING_PROFILES_ACTIVE: "prim"
      LOGGING_LEVEL_ORG_HIBERNATE_SQL: DEBUG
      LOGGING_LEVEL_ORG_HIBERNATE_TYPE_DESCRIPTOR_SQL_BASICBINDER: TRACE
    networks:
      - loader-network
    ports:
      - 8005:8005
      - 8085:8085

  attachment-loader-sec:
    container_name: attachment-loader-sec
    build:
      context: ""
    restart: always
    image: attachment-loader:latest
    environment:
      SPRING_PROFILES_ACTIVE: "sec"
      LOGGING_LEVEL_ORG_HIBERNATE_SQL: DEBUG
      LOGGING_LEVEL_ORG_HIBERNATE_TYPE_DESCRIPTOR_SQL_BASICBINDER: TRACE
    networks:
      - loader-network
    ports:
      - 8006:8005
      - 8086:8086

networks:
  loader-network:
    attachable: true
This is my Dockerfile:
FROM adoptopenjdk/openjdk11:alpine-jre

VOLUME /tmp

ARG TZ='Europe/Berlin'

RUN sed -i 's/dl-cdn.alpinelinux.org/uk.alpinelinux.org/' /etc/apk/repositories
RUN apk upgrade --update \
    && apk add -U tzdata curl jq \
    && cp /usr/share/zoneinfo/${TZ} /etc/localtime \
    && apk del tzdata \
    && rm -rf \
        /var/cache/apk/*
RUN echo ${TZ} > /etc/timezone

ARG DEPENDENCY=build/dependency

COPY ${DEPENDENCY}/BOOT-INF/lib /app/lib
COPY ${DEPENDENCY}/META-INF /app/META-INF
COPY ${DEPENDENCY}/BOOT-INF/classes /app

ENTRYPOINT ["java","-agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=*:8005","-cp","app:app/lib/*","com.path.to.your.Application.kt"]
After docker-compose build --no-cache, to make sure it restarts with the most up-to-date image, I use:
docker-compose up -d --force-recreate <service>
To force a build with no cache, you could try the --no-cache option, which rules out any caching from actually happening. You could also force a clean-up of all the images by using something along these lines:
docker-compose down -v --rmi all --remove-orphans
and then try the rebuild again.
Without more details or seeing the actual docker-compose file being used, this is a general attempt to resolve the issue.
EDIT: Based on the example you are showing, the build may not pick up any changes for the new image, in which case --build just reuses the image that was cached earlier. The output of the build would be helpful here, but if there is no reason for Docker to build a new image, it will skip the build in favor of the cached image that already exists. Try the --no-cache option in the build, and check that all the COPY/RUN steps you expect to trigger a new build actually do.
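Putting this together for the compose file above, a full forced rebuild cycle for one service might look like this (service name taken from the question):
docker-compose build --no-cache attachment-loader-prim
docker-compose up -d --force-recreate attachment-loader-prim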