Docker: Build a base image and make the Dockerfile point to it

I intend to build and run a dockerized container using an image that is built locally, not pulled from Docker Hub. My use case is the following:
I cloned the source code of an open source repo (https://github.com/jitsi/jitsi-meet).
They have their own dockerized version too, which builds from the source and is deployed on Docker Hub (https://github.com/jitsi/docker-jitsi-meet).
I renamed the files of jitsi-meet, and the occurrences of those names inside the files, for my own ease of use.
I packaged the final changes as a 7z archive.
I now need to build the image locally from the code in the 7z package (or from the source folder itself), without uploading the image to the public Docker Hub.
I need to RUN a specific set of commands inside the Dockerfile.
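In other words, the intended flow is roughly the following (the paths and the 7z name here are illustrative, not my real layout):

# extract the modified sources
7z x myproject.7z -omyproject
# build the base image locally, with the tag the Dockerfile's FROM expects
docker build -t myproject/base:stable ./myproject/base
# then build the dependent image(s) against the local base
docker-compose build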
My Dockerfile:
ARG MYPROJECT_REPO=myproject
ARG BASE_TAG=stable

FROM ${MYPROJECT_REPO}/base:${BASE_TAG}

LABEL org.opencontainers.image.title="Myproject"
LABEL org.opencontainers.image.url="https://myproject.org/myproject-meet/"
LABEL org.opencontainers.image.source="https://github.com/myproject/docker-myproject-meet"
LABEL org.opencontainers.image.documentation="https://myproject.github.io/handbook/"

ADD https://raw.githubusercontent.com/acmesh-official/acme.sh/2.8.8/acme.sh /opt

COPY rootfs/ /

RUN apt-dpkg-wrap apt-get update && \
    apt-dpkg-wrap apt-get install -y cron nginx-extras myproject-meet-web socat curl jq && \
    mv /usr/share/myproject-meet/interface_config.js /defaults && \
    rm -f /etc/nginx/conf.d/default.conf && \
    apt-cleanup

EXPOSE 80 443
VOLUME ["/config", "/usr/share/myproject-meet/transcripts"]
My docker-compose.yml (only the relevant parts):
services:
  # Frontend
  myproject_webserver:
    container_name: myproject-webserver
    build:
      dockerfile: ./Dockerfile
      context: ./
    #image: jitsi/web:${JITSI_IMAGE_VERSION:-unstable}
    restart: ${RESTART_POLICY:-unless-stopped}
    ports:
      - '${HTTP_PORT}:80'
      - '${HTTPS_PORT}:443'
    volumes:
      - ${CONFIG}/web:/config:Z
      - ${CONFIG}/web/crontabs:/var/spool/cron/crontabs:Z
      - ${CONFIG}/transcripts:/usr/share/myproject-meet/transcripts:Z
    environment:
      - AMPLITUDE_ID
      - ANALYTICS_SCRIPT_URLS
As you can see, I have commented out the public Docker Hub image of jitsi and used a build context instead. I need to build the image locally and have docker-compose deploy that local image.
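One way to express this (a sketch; the image name below is illustrative) is to keep both build: and image: in the service, so docker-compose builds the Dockerfile locally and tags the result with the given name instead of pulling it:

services:
  myproject_webserver:
    build:
      dockerfile: ./Dockerfile
      context: ./
      args:
        MYPROJECT_REPO: myproject   # fills the ARG declared before FROM
        BASE_TAG: stable
    image: myproject/web:latest     # tag applied to the locally built image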
My core problem stems from renaming the files/folders and their contents.
Kindly correct my understanding of the following: if I had used the original code, I could have made the necessary minor changes to it without renaming anything, used a COPY command in the Dockerfile to put my modified file in place of the original one, kept everything else intact, and also kept the image line in docker-compose.yml as is.
So if the original repo has a folder A/filenamea.js running inside a container: can the Docker COPY command be used with my renamed file A1/filenamea1.js, to replace and run it instead of the one inside the container at A/filenamea.js?
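For illustration, the overlay I have in mind would look roughly like this (the destination path is a placeholder, not the real path in the image):

FROM jitsi/web:stable
# put the renamed local file where the original one lives in the image
COPY A1/filenamea1.js /path/in/image/A/filenamea.js

i.e. COPY can drop a file from the build context onto any path in the image, including over an existing file; what matters is that the destination keeps the name and path that the rest of the code refers to.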

Related

Docker force-build parent image

I'm running a multi-service application with several images. The environment of each image is much the same, so, in order to avoid code duplication, a "base" image is created and tagged with the required programs/configuration. This "base" image is then used as the parent image for the various "application" images. An (illustrative) example is given below:
dockerfile_base: which I build with docker build -f dockerfile_base -t app_base:latest .
FROM ubuntu:latest
RUN apt-get update && apt-get install -y \
    build-essential
dockerfile_1: which is built with docker build -f dockerfile_1 -t app_1 .
FROM app_base:latest
COPY . .
RUN make test
And finally an example dockerfile_2 which describes a different service based again on "app_base" and is built with docker build -f dockerfile_2 -t app_2 .
FROM app_base:latest
COPY . .
RUN make deploy
Usually, the "base" image is built manually at first. Then, the "app" images are also manually built. Finally, the services (images app_1, app_2, etc.) are run using docker run for tests or docker-compose for demo deployment.
This creates an issue: when working in a new workspace (e.g. a newcomer's PC) where no docker images have been created yet, or when something changes in dockerfile_base, running just the docker build command for the app images will result in an error or in incorrect images. So, the question is: is there a way in docker to define these chained builds? I guess that's difficult for the docker build command, but would it be possible with docker-compose?
OK, so this is what I came up with; it essentially streamlines the whole multi-build, multi-image process into just two commands. The docker-compose.yaml file was created like this:
version: "3.4"
services:
# dummy service used only for building the images
dummy_app_base:
image: app_base:latest
build:
dockerfile: "${PWD}/dockerfile_base"
context: "${PWD}"
command: [ "echo", "\"dummy_app_base:latest EXIT\"" ]
app_1:
image: app_1:latest
build:
dockerfile: "${PWD}/dockerfile_1"
context: "${PWD}"
app_2:
image: app_2:latest
build:
dockerfile: "${PWD}/dockerfile_2"
context: "${PWD}"
So, to build all the images, I simply run docker-compose build. The build command essentially builds and tags all the images in the order they appear in the docker-compose.yaml file, so when building app_1 and app_2, the dependency app_base:latest is already built. Then everything is run with docker-compose up. Note: this WILL create a (stopped) container for the dummy_app_base service, but because its command is overridden with an echo, it simply exits immediately.
edit: even in one command: docker-compose up --build
Multi-stage builds were invented for problems like this. An example might be:
FROM ubuntu:latest as app_base
RUN apt-get update && apt-get install -y build-essential

FROM app_base as app_name
COPY . .
RUN make
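Individual stages of such a file can also be built and tagged on their own via --target, for example:

# build and tag only the shared base stage
docker build --target app_base -t app_base:latest .
# build the final stage (and implicitly everything it depends on)
docker build -t app_name:latest .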

Docker volume not being mounted in container

I'm trying to mount a volume using docker-compose so I can hot-reload some C code while developing. I've used Docker a couple of times before and hit this exact use case while working on a Node.js website, but I'm completely at a loss here.
My docker-compose.yml and Dockerfile have been stripped down to the bare minimum. I would like to mount my current directory (all the source code) into the container. My Dockerfile just installs some dependencies, sets the working directory and then attempts to run the makefile, while my docker-compose.yml adds the volume. The result is a container that cannot access the mounted volume and the code in it (there's nothing wrong with the Makefile: it works on the host directory, and also when the code is copied in instead of mounted). Does anyone see anything wrong with either of these files? It appears the /cfs folder isn't even being created in the container. I tried mounting it to the home directory, to no avail.
docker-compose.yml
version: '3'
services:
  cfs:
    volumes:
      - ./:/cfs
    build:
      context: ./
      dockerfile: ./Dockerfile
    networks:
      - default
networks:
  default:
    internal: true
Dockerfile
FROM ubuntu:20.04
# install dependencies
RUN apt-get -qy update \
 && apt-get -y install \
    cmake=3.16.3-1ubuntu1 \
    make=4.2.1-1.2 \
    gcc=4:9.3.0-1ubuntu2 \
    g++=4:9.3.0-1ubuntu2
WORKDIR /cfs
RUN make prep
RUN make
RUN make install
Most Compose settings aren't visible during an image build. volumes: aren't mounted, environment: variables aren't set, networks: aren't accessible. Only the settings within the immediate build: block have an effect.
That means you should look at the Dockerfile in isolation, ignoring the docker-compose.yml file. At that point, the /cfs directory is empty (you don't COPY any source code into it), so the RUN make ... commands will fail. It doesn't matter that the directory will eventually have something else mounted over it.
If you're just planning to recompile the application when the source code changes, delete the volumes:, COPY the source into the image, and run docker-compose build at the point you'd typically run make. If you do have a setup that can rebuild the application when its source changes, you need to set the image's CMD to launch that, but if you don't COPY the source in, you can't build it at image build time. (...and if you're going to overwrite the entire interesting content of the image with a volume, it will get lost anyways.)
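A minimal sketch of that first suggestion, assuming the Makefile and sources sit at the root of the build context:

FROM ubuntu:20.04
RUN apt-get -qy update \
 && apt-get -y install \
    cmake=3.16.3-1ubuntu1 \
    make=4.2.1-1.2 \
    gcc=4:9.3.0-1ubuntu2 \
    g++=4:9.3.0-1ubuntu2
WORKDIR /cfs
# bake the source into the image so the build-time make commands can see it
COPY . .
RUN make prep
RUN make
RUN make install

With the volumes: block deleted from docker-compose.yml, running docker-compose build then takes the place of the host-side make invocation.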

Docker volume not working with Docker-compose to generate Doxygen documentation

I'm trying to generate Doxygen documentation in a Dockerfile, using docker-compose to run all the services at the same time.
The goal is to generate the documentation in the container and retrieve the generated files locally.
Here is my Dockerfile:
FROM ubuntu:latest
RUN apt-get update -y
RUN apt-get install -y doxygen doxygen-gui doxygen-doc graphviz
WORKDIR /doc
COPY Doxyfile .
COPY logo.png .
RUN doxygen Doxyfile
Here is the docker-compose file with the doc service:
version: "3"
services:
  doc:
    build: ./doc
    volumes:
      - ./documentation:/doc
The doc is generated in the container, and a new directory named "documentation" is created locally, but it is empty. How can I get it filled with the documentation generated in the container?
The goal is to generate the documentation in the container and to
retrieve the generated files in local.
You use a local directory as the source of the mount here: - ./documentation:/doc.
This makes the /doc directory in the container synchronize with ./documentation on the host, but the source of the content is the host, not the container: the bind mount hides whatever the image already had in /doc.
To get the generated files onto the host you can use a named volume instead:
volumes:
  - documentation-doxygen:/doc
(declared once more under a top-level volumes: key so compose creates it).
After the container has run you can get more information on that volume (its location, among other things) with docker volume inspect documentation-doxygen.
But if you mount the volume only to get at the created folder, I think you don't need a volume at all.
A more direct alternative is simply to copy the folder out of the container after it has run:
docker cp DOC_CONTAINER_ID:/doc ./documentation-doxygen
As another alternative, if you want to execute doxygen Doxyfile against the local folder but inside a container (a workable approach in a local environment), you can replace RUN with CMD or ENTRYPOINT so it executes as the container's startup command, and mount the current directory as a bind mount.
That spares you some copies in the Dockerfile.
FROM ubuntu:latest
RUN apt-get update -y
RUN apt-get install -y doxygen doxygen-gui doxygen-doc graphviz
WORKDIR /doc
# REMOVE THAT COPY Doxyfile .
# REMOVE THAT COPY logo.png .
ENTRYPOINT doxygen Doxyfile
And the docker-compose part :
version: "3"
services:
doc:
build: ./doc
volumes:
- ./:/doc
Here ./ is specified as the bind source: the directory containing the compose file is mounted over /doc in the doc service.
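With that setup, a one-off run leaves the generated files directly in the mounted host directory (assuming the Doxyfile sits next to the docker-compose.yml):

docker-compose run --rm doc

The --rm flag removes the container once doxygen finishes; only the files written through the bind mount remain.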

Why are files I generate with a bash script within a docker container also saving locally?

I have an aws/appium test project I want to run in docker. I have a bash script that runs in the container which downloads a file from S3 and creates a zip of my project.
The Dockerfile:
FROM maven:3.3.9
RUN apt-get update && \
    apt-get -y install python && \
    apt-get -y install python-pip && \
    pip install awscli
RUN export PATH=$PATH:/usr/local/bin
There's a docker-compose file; its command runs a bash script:
version: '2'
volumes:
  maven_cache: ~
services:
  application: &application
    build: .
    tmpfs:
      - /tmp:rw,nodev,noexec,nosuid
    volumes:
      - ./:/app
      - maven_cache:/root/.m2/repository
    working_dir: /app
    command: ./aws-upload.sh
This is the beginning of the ./aws-upload.sh bash script. It prepares the files I need for uploading later:
#!/usr/bin/env bash
mvn clean package -DskipTests=true
aws s3 cp s3://<bucket-name>/app.apk $(pwd)
cp target/zip-with-dependencies.zip $(pwd)
I only want the above files to exist within the container; however, they also appear locally. Is there something in my docker-compose file that isn't configured correctly?
Thanks
In your compose file you define the volume ./:/app, which maps the host folder containing the compose file to the container's /app folder. If your bash script creates files in the app folder, it will also make them available on the host.
If you want to avoid this, either remove the volume mapping (in case you don't need it) or execute the script in another folder that is not mapped to your host.
This is normal. You declared the following inside the compose file:
volumes:
  - ./:/app
This means: mount the current host directory onto /app inside the container. This effectively keeps the current directory and the /app folder inside the container in sync.
Thus, if the aws-upload.sh script creates files in /app, they also show up next to the compose file.
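For illustration, a sketch of the first suggestion: the bind mount is removed and the project is baked into the image instead (the COPY destination is an assumption based on the working_dir):

version: '2'
volumes:
  maven_cache: ~
services:
  application: &application
    build: .
    tmpfs:
      - /tmp:rw,nodev,noexec,nosuid
    volumes:
      - maven_cache:/root/.m2/repository
    working_dir: /app
    command: ./aws-upload.sh

with COPY . /app added to the Dockerfile, so the script and the files it produces exist only inside the image and container.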

Running composer install in docker container

I have a docker-compose.yml file which looks like this:
version: '2'
services:
  php:
    build: ./docker/php
    volumes:
      - .:/var/www/website
The Dockerfile located in ./docker/php looks like this:
FROM php:fpm
RUN php -r "copy('https://getcomposer.org/installer', 'composer-setup.php');"
RUN php composer-setup.php
RUN php -r "unlink('composer-setup.php');"
RUN mv composer.phar /usr/local/bin/composer
RUN composer update -d /var/www/website
However, this always fails with the error:
[RuntimeException]
Invalid working directory specified, /var/www/website does not exist.
When I remove the RUN composer update line and enter the container, the directory does exist and contains my project code.
Please tell me if I am doing anything wrong, or if I'm running the composer update in the wrong place.
RUN ... lines are run while the image is being built; volumes are attached to the container afterwards. You have at least two options here:
use the COPY command to, well, copy your app code into the image, so that all commands after it have access to it (do not push the image to any public Docker repo, as it will contain your source, which you probably don't want to leak);
install the composer dependencies with a command run in your container (CMD or ENTRYPOINT in the Dockerfile, or the command option in docker-compose).
You are mounting your local volume over your build directory, so anything you built into /var/www/website will be hidden by your local volume when the container runs.
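A minimal sketch of the first option; it assumes the build context is moved to the project root, since the original context ./docker/php does not contain the application code:

FROM php:fpm
RUN php -r "copy('https://getcomposer.org/installer', 'composer-setup.php');" && \
    php composer-setup.php && \
    php -r "unlink('composer-setup.php');" && \
    mv composer.phar /usr/local/bin/composer
# bake the project into the image so composer can see it at build time
COPY . /var/www/website
RUN composer update -d /var/www/website

with the compose service changed accordingly:

services:
  php:
    build:
      context: .
      dockerfile: docker/php/Dockerfile

Note that keeping the .:/var/www/website bind mount would still hide the baked-in result at runtime, as pointed out above, so either drop the mount or run composer as a startup command instead.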
