Copy file before building the image in Docker

So I just need a simple thing: copying a script whose relative path (from the working directory) is influxdb/init.iql (an InfluxDB initialization script) to the path /docker-entrypoint-initdb.d/, which is a way to initialize an InfluxDB database according to the InfluxDB Docker image docs:
Initialization Files
If the Docker image finds any files with the extensions .sh or .iql inside of the /docker-entrypoint-initdb.d folder, it will execute them
Right now, my docker-compose.yml is:
version: "3.3"
services:
  influxdb:
    image: influxdb:latest
    container_name: influxdb
    ports:
      - "8087:8086"
    volumes:
      - influxdb-data:/var/lib/influxdb
volumes:
  influxdb-data:
I need the script to be copied before the image gets built, because if it finds the script at the specified path, it will execute it when building the image.
How can I do this? I thought about implementing a Makefile, but I would prefer to use Docker itself if possible, so as not to add an unnecessary extra piece to the project.
Thanks in advance.

The docker-compose file tells how to RUN an image, not how to BUILD it. These two are completely separate concepts. Also, I'm not sure what you are trying to do. If you need to initialize your container with data, just mount a script (or an .iql file) to the /docker-entrypoint-initdb.d location inside the volumes of the docker-compose file, e.g.:
volumes:
  - influxdb-data:/var/lib/influxdb
  - ./project_import.sh:/docker-entrypoint-initdb.d/import.sh:ro
The script(s) (or .iql file(s)) will be executed when the container starts, not when the image is built. If you don't believe me, check out the entrypoint script of the image to see how this process works.
Just remember that those scripts will get executed each time the container starts.
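For example, applied to the compose file from the question, the whole service definition might look like the sketch below (this assumes the script really is at ./influxdb/init.iql relative to docker-compose.yml):

version: "3.3"
services:
  influxdb:
    image: influxdb:latest
    container_name: influxdb
    ports:
      - "8087:8086"
    volumes:
      - influxdb-data:/var/lib/influxdb
      - ./influxdb/init.iql:/docker-entrypoint-initdb.d/init.iql:ro
volumes:
  influxdb-data:

The :ro suffix mounts the script read-only, which is all the entrypoint needs.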

Related

cp a file from within a volume to another location in the container - just use a volume, add a Dockerfile? Or can I do it within compose.yml?

I have a docker-compose file in my working directory. I don't have a Dockerfile (yet; I'm unsure if I need one). Here's my docker-compose file:
version: "3.5"
services:
  ide-rstudio:
    image: rocker/verse:latest
    ports:
      - 8787:8787
      - 3838:3838
    environment:
      PASSWORD: test
      ROOT: "TRUE"
      ADD: "shiny"
    volumes:
      - ${PROJECTS_DIR}/Zen:/home/rstudio/Projects
When I run this, a new container runs as expected. In the volume I have a file /Zen/ide-rstudio/rstudio-prefs.json. I would like to add rstudio-prefs.json into my container at /home/rstudio/.config/rstudio/rstudio-prefs.json. I CAN already do this by using a volume and adding this line to my docker-compose volumes:
volumes:
  - ${PROJECTS_DIR}/Zen:/home/rstudio/Projects
  - ${PROJECTS_DIR}/Zen/ide-rstudio/rstudio-prefs.json:/home/rstudio/.config/rstudio/rstudio-prefs.json
My question is: after adding the volume in the first line (${PROJECTS_DIR}/Zen:/home/rstudio/Projects), the file rstudio-prefs.json already exists in the container at /home/rstudio/Projects/ide-rstudio/rstudio-prefs.json. So I would really just like to run the following shell command after the container starts: cp /home/rstudio/Projects/ide-rstudio/rstudio-prefs.json /home/rstudio/.config/rstudio/rstudio-prefs.json.
Is it possible to run a shell command within a service using docker-compose? Or, must I now create a Dockerfile?
You should use the volumes: approach you show. This works automatically and doesn't require any user intervention. There's no harm in having a second copy of the file in the container, especially a small configuration file.
You could in principle run docker-compose exec after the container starts up. There are a couple of problems with doing this. If the config file is read by the container's main process, that will happen before you have an opportunity to run debug commands like this. You'll need to remember to repeat this command every time you restart the container. If you wind up in a cluster environment like Kubernetes, you'll need to remember to do this on every replica of the container, and arrange for it to happen if the cluster restarts the container without your knowledge (for example, if a node fails).
If you want this to happen reliably, as a shell command, then you need to write an entrypoint wrapper script. This runs whatever first-time setup you need and then execs the image's original entrypoint. This is easier to do reproducibly with a custom Dockerfile, and requires some knowledge of the image's detailed setup.
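For illustration, here is a minimal sketch of such a wrapper for the rstudio case above. The exec target /init is an assumption (rocker images typically start through an init process); check docker inspect on your image for its real entrypoint/command before relying on this:

#!/bin/sh
# entrypoint-wrapper.sh (hypothetical): do first-time setup, then
# hand control to the image's original entrypoint.
set -e
mkdir -p /home/rstudio/.config/rstudio
cp /home/rstudio/Projects/ide-rstudio/rstudio-prefs.json \
   /home/rstudio/.config/rstudio/rstudio-prefs.json
exec /init "$@"   # /init is assumed; verify with docker inspect

You would then mount or COPY this script into the container and set it as the entrypoint:, passing the original command through.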
The one-line volumes: to inject the same file a second time is much easier.

Nothing happens when copying file with Dockerfile

I use docker-compose for a simple Keycloak container and I've been trying to install a new theme for Keycloak.
However, I've been unable to copy even a single file to the container using a Dockerfile. The Dockerfile and docker-compose.yml are in the same directory.
Neither of these commands works or causes any events or warnings in the logs.
COPY test /tmp/
COPY /home/adm/workspace/docker/keycloak-cluster/docker/kctheme/theme/login/. /opt/jboss/keycloak/themes/keycloak/login/
Copying manually with
sudo docker cp test docker_keycloak_1:/tmp
works without any issues.
A quick recap of Docker:
docker build: create an image from a Dockerfile.
docker run: create a container from an image (you can build the image yourself or use an existing image from Docker Hub).
Based on what you said, you have two options.
Option 1: Create a new Docker image based on the existing one and add the theme, with something like:
# Dockerfile
FROM jboss/keycloak
COPY test /tmp/
# COPY sources must be relative to the build context; absolute host paths won't work
COPY docker/kctheme/theme/login/ /opt/jboss/keycloak/themes/keycloak/login/
and then use docker build to create your new image.
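A minimal invocation for this sketch (the tag my-keycloak is just an illustration) would be:

docker build -t my-keycloak .

after which you can point the image: line of your docker-compose.yml at my-keycloak, or swap it for a build: . entry so docker-compose builds the image itself.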
Option 2: Mount the theme into the correct directory using a docker-compose volume:
version: '3'
services:
  keycloak:
    image: quay.io/keycloak/keycloak:latest
    volumes:
      - "./docker/kctheme/theme/login:/opt/jboss/keycloak/themes/keycloak/login"
If you use COPY, the source files have to live in the build context: the directory containing your Dockerfile, or a subdirectory of it, and they have to be present at build time. Absolute host paths won't work.
/tmp as a destination is also a bit tricky, because the container's startup process might clean out /tmp, which means you would never see that file in a running container.

Create Docker instance .tar including volumes

I have a local Gitlab docker image running and added a bunch of projects. This project/repository data seems to end up inside the three volumes that the image has created.
I want to create a single .tar of the Gitlab instance which includes the complete image + all data found in the volumes. It's okay that the .tar becomes huge.
I tried to accomplish this by using docker commit and docker save but I have been unable to save the volumes along with the image.
How can I create such a single .tar export?
If I was going to set this up, I'd have a single Docker Compose file that contained all of the relevant pieces in a single directory tree.
version: '3'
services:
  db:
    image: 'postgres:11'
    volumes:
      - './postgres:/var/lib/postgresql/data'
  gitlab:
    image: gitlab-community-edition
    # Details below here made up
    ports:
      - 8080:80
    environment:
      PGHOST: db
    volumes:
      - './gitlab:/data'
The important thing here is that every piece of persisted data is in the same directory tree on the host. The actual contents of the container filesystem aren't important (every piece of persisted data is in these directories) and the images aren't important (they can be pulled from Docker Hub).
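Given that layout, the single archive the question asks for is just a tar of the directory tree, taken while the stack is stopped (file and directory names here follow the sketch above):

docker-compose down    # stop the containers so the data files are quiescent
tar czf gitlab-backup.tar.gz docker-compose.yml postgres/ gitlab/

Restoring on another host is the reverse: untar the archive and run docker-compose up.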
You may use docker cp to achieve this.
docker cp my_container:/path/to/gitlab /destination/folder
and then tar the contents of the destination folder.
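For example, continuing with the placeholder paths above:

tar czf gitlab-data.tar.gz -C /destination/folder .

Note that docker cp only reaches paths inside that one container, so you may need one copy per volume mount point.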
You'll need to use docker export to create the tar, and then docker import to load it back; note that docker export captures a container's filesystem but not its volumes. Essentially you're looking for an installer, from the sounds of it. You can write a bash script which copies whatever files you need and exports the images.
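As a sketch, with placeholder container and tag names:

docker export docker_gitlab_1 > gitlab-fs.tar     # container filesystem only; volumes are NOT included
docker import gitlab-fs.tar my/gitlab-snapshot    # re-create an image from that tar

Because docker export skips volumes, such a script would still need docker cp (or a tar of bind-mounted directories) to capture the repository data.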

Entry point of docker container dependent on local file system and not in the image

I am working on a docker container that is being created from a generic image. The entry point of this container is dependent on a file in the local file system and not in the generic image. My docker-compose file looks something like this:
service_name:
  image: base_generic_image
  container_name: container_name
  entrypoint:
    - "/user/dlc/bin"
    - "-p"
    - "localFolder/fileName.ext"
    # more parameters
The challenge that I am facing is removing this dependency and adding it to the base_generic_image at run time so that I can deploy it independently. Should I add this file to the base generic image and then proceed (this file is not required by others), or should this be done when creating the container? If so, what is the best way of going about it?
You should create a separate image for each part of your application. These can be based on the base image if you'd like; the Dockerfile might look like
FROM base_generic_image
COPY dlc /usr/bin
CMD ["dlc"]
Your Docker Compose setup might have a subdirectory for each component and could look like
servicename:
  image: my/servicename
  build:
    context: ./servicename
  command: ["dlc", "-p", ...]
In general Docker volumes and bind-mounts are good things to use for persistent data (when absolutely required; stateless containers with external databases are often easier to manage), getting log files out of containers, and pushing complex configuration into containers. The actual program that's being run generally should be built into the base image. The goal is that you can take the image and docker run it on a clean system without any of your development environment on it.
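Put differently: once the my/servicename image from the sketch above is built and pushed to a registry, a clean host needs nothing but

docker pull my/servicename
docker run my/servicename

with any runtime configuration supplied through environment variables or mounted config files.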

Overwrite files with `docker run`

Maybe I'm missing this when reading the docs, but is there a way to overwrite files on the container's file system when issuing a docker run command?
Something akin to the Dockerfile COPY command? The key desire here is to be able to take a particular Docker image, and spin several of the same image up, but with different configuration files. (I'd prefer to do this with environment variables, but the application that I'm Dockerizing is not partial to that.)
You have a few options. Using something like docker-compose, you could automatically build a unique image for each container using your base image as a template. For example, if you had a docker-compose.yml that looked like:
container0:
  build: container0
container1:
  build: container1
And then inside container0/Dockerfile you had:
FROM larsks/thttpd
COPY index.html /index.html
And inside container0/index.html you had whatever content you wanted, then running docker-compose build would generate unique images for each entry (and running docker-compose up would start everything up).
I've put together an example of the above here.
Using just the Docker command line, you can use host volume mounts, which allow you to mount files into a container as well as directories. Using my thttpd as an example again, you could use the following -v argument to override /index.html in the container with the content of your choice (docker run requires an absolute host path for bind mounts, hence the $(pwd)):
docker run -v $(pwd)/index.html:/index.html larsks/thttpd
And you could accomplish the same thing with docker-compose via the volume entry:
container0:
  image: larsks/thttpd
  volumes:
    - ./container0/index.html:/index.html
container1:
  image: larsks/thttpd
  volumes:
    - ./container1/index.html:/index.html
I would suggest that using the build mechanism makes more sense if you are trying to override many files, while using volumes is fine for one or two files.
A key difference between the two mechanisms is that when building images, each container gets its own baked-in copy of the files, while with volume mounts, changes made to the file inside the container will be reflected on the host filesystem (and vice versa).
