I am trying to containerise an API automation repo to run it on CI/CD (GoCD). Below is the Dockerfile content.
FROM alpine:latest
RUN apk add --no-cache python3 \
&& pip3 install --upgrade pip
WORKDIR /api-automation
COPY . /api-automation
RUN pip --no-cache-dir install .
COPY api_tests.conf /usr/.ops/config/api_tests.conf
ENTRYPOINT ["pytest", "-s", "-v", "--cache-clear", "--html=report.html"]
Below is the content of api_tests.conf configuration file.
[user]
username=<user_name>
apikey=<api_key>
[tokens]
token1=<token1>
api_tests.conf is the configuration file and it holds sensitive data such as API keys and tokens (note: the configuration file is not encrypted). Currently I copy this config from the repo to /usr/.ops/config/api_tests.conf in the container, but I do not want to do this because of the security concerns. So how can I copy this api_tests.conf file into the container when I run it from the CI/CD machine? (That is, I need to remove the instruction COPY api_tests.conf /usr/.ops/config/api_tests.conf from the Dockerfile.)
My second question is:
If I create a secret using the command docker secret create my_secret file_path, how can I get this secret api_tests.conf file into the container when I run it?
Note: Once the api_tests.conf file is copied into the container, I need to run the command "pytest -s -v --cache-clear --html=report.html".
Please provide your inputs.
If you want to avoid putting the line COPY api_tests.conf /usr/.ops/config/api_tests.conf in the Dockerfile, use the -v option of docker run, which mounts a file or directory from the host into the container filesystem.
docker run -itd -v /Users/basavarajlamani/Documents/api_tests.conf:/usr/.ops/config/api_tests.conf image-name
If you want to use docker secret to supply the config file:
Make sure you're running Docker Swarm, since docker secret works with the swarm orchestrator.
Create a docker secret from the contents of the config file: docker secret create api_test.conf /Users/basavarajlamani/Documents/api_tests.conf
docker secret ls will show the created secret.
Run your docker container as a service in swarm:
docker service create \
--name myservice \
--secret source=api_test.conf,target=/usr/.ops/config/api_tests.conf \
image-name
NOTE: You can also use docker config rather than docker secret; the difference is that configs are not encrypted at rest and are mounted directly into the container's filesystem.
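If you omit the target= option, a secret is mounted at its default location /run/secrets/&lt;name&gt;. In that case a small entrypoint wrapper can copy it to the path the tests read before handing off to pytest. This is only a sketch: the install_conf helper is made up for illustration, and the paths match the question's layout.

```shell
#!/bin/sh
# Hypothetical entrypoint wrapper: copy a mounted secret into the path the
# tests read from, then exec the container command (the pytest invocation).

install_conf() {
  src="$1"
  dest="$2"
  # Copy only when the secret is mounted and the destination is missing
  if [ -f "$src" ] && [ ! -f "$dest" ]; then
    mkdir -p "$(dirname "$dest")"
    cp "$src" "$dest"
  fi
}

install_conf /run/secrets/api_tests.conf /usr/.ops/config/api_tests.conf
exec "$@"   # e.g. pytest -s -v --cache-clear --html=report.html
```

In the Dockerfile you would make this script the ENTRYPOINT and move the pytest command to CMD, so it is passed through as "$@".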
Hope it helps.
I am building my first Docker image and I am a beginner.
It is a simple Python HTTP server. This is my Dockerfile:
FROM python:3.8.0-slim
WORKDIR /src
COPY src/ .
CMD [ "python", "-m", "http.server", "--cgi", "8000"]
I have a config folder in /src with some config files.
I named the image "my-server"
I create a container with
docker run -d \
--name "my-server" \
-p 8000:8000 \
-v /dockerdata/appdata/my-server/config/:/src/config \
--restart unless-stopped \
my-server
The issue is that /dockerdata/appdata/my-server/config/ is empty on my host.
I see this done in all the Docker Hub images I use, and their mounted volumes are not empty.
How do they do it?
Their startup sequence explicitly copies source files into the volume, or otherwise creates them. A Docker mount always replaces the content in the image with the content of whatever's being mounted; there is no way to mount the container content to the host.
(The one exception to this "always" is, if you're using native Docker, and you're mounting a named volume, and the named volume is empty, then content from the image is copied into the volume first; but the content is never ever updated, it only works for named volumes and not other kinds of mounts, and it doesn't work on other environments like Kubernetes. I would not rely on this approach.)
If the configuration is a single file, this isn't a huge imposition. You probably already need to distribute artifacts like a docker-compose.yml file separately from the image, so distributing a default configuration isn't much more. If defaults are compiled into your application and an empty configuration is valid, this also simplifies things. Another helpful approach could be to have a search path for configuration files, and read both a "user" and "system" configuration.
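The search-path idea from the paragraph above can be sketched as a small helper that returns the first configuration file that exists, so a bind-mounted "user" config overrides the "system" default baked into the image. The function name and paths here are illustrative, not part of the original answer.

```shell
#!/bin/sh
# Return the first candidate config file that exists; fail if none do.
find_config() {
  for candidate in "$@"; do
    if [ -f "$candidate" ]; then
      printf '%s\n' "$candidate"
      return 0
    fi
  done
  return 1   # nothing found; the caller can fall back to built-in defaults
}

# Example: CONFIG=$(find_config /src/config/config.yml /src/default-config/config.yml)
```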
If you do need to copy files out to a host directory or other mount point, I would generally do this with an entrypoint wrapper script. You will need to keep a copy of the configuration in the image somewhere that's not the actual config directory so that you can copy it when it doesn't exist. The script can be fairly straightforward:
#!/bin/sh
# Copy the default configuration if it doesn't exist
if [ ! -f config/config.yml ]; then
cp default-config/config.yml config
fi
# Run the main container command
exec "$@"
You may need to do some shuffling in your Dockerfile; the important thing is to make this script be the ENTRYPOINT but leave the CMD unchanged.
# Save the "normal" config away; the entrypoint script will create
# the "real" config if one isn't mounted
RUN mv config default-config \
&& mkdir config
# Launch the server via the entrypoint wrapper; must be JSON-array syntax
ENTRYPOINT ["./entrypoint.sh"]
# unchanged
CMD ["python", "-m", "http.server", "--cgi", "8000"]
This is expected: a bind mount over a container directory obscures the directory's existing content. If you mount a named volume instead, the directory's contents are copied into the volume (provided the volume is empty).
docker run -d \
--name "my-server" \
-p 8000:8000 \
-v myvol:/src/config \
--restart unless-stopped \
my-server
Now if you run docker run -it -v myvol:/config --rm busybox ls /config you will see the copied content.
I have a volume which contains data that needs to stay persisted. When the volume is created for the first time and mounted into my Node container, all the container's contents are copied to the volume, and everything behaves as expected.

The issue is that when I change a few files in my Node container, I remove the old image and container and rebuild them from scratch. When I run the updated container, its files don't get copied into the volume. This means the volume still contains the old files, so when it is mounted into the container none of the updated functionality is present. I would have to remove and recreate the volume from scratch, which I can't do, since the volume's data needs to be persisted.
Here is my dockerfile
FROM mcr.microsoft.com/dotnet/sdk:5.0
COPY CommandLineTool App/CommandLineTool/
COPY NeedBackupNodeServer App/NeedBackupNodeServer/
WORKDIR /App/NeedBackupNodeServer
RUN curl -sL https://deb.nodesource.com/setup_14.x | bash - \
&& apt update \
&& apt install -y nodejs
EXPOSE 5001
ENTRYPOINT ["node", "--trace-warnings", "index.js"]
Here are my commands and expected output
docker volume create nodeServer-and-commandLineTool-volume
docker build -t need-backup-image -f Dockerfile .
docker run -p 5001:5001 --name need-backup-container -v nodeServer-and-commandLineTool-volume:/App need-backup-image -a
when running
docker exec need-backup-container cat index.js
the file is present and contains the latest updates, since the volume was just created.
Now when I update some files, I need to rebuild the image and the container, so I run
docker rm need-backup-container
docker rmi need-backup-image
docker build -t need-backup-image -f Dockerfile .
docker run -p 5001:5001 --name need-backup-container -v nodeServer-and-commandLineTool-volume:/App need-backup-image -a
Now I thought that when running
docker exec need-backup-container cat index.js
I'd see the updated file changes, but nope, I only see the old files that were first created when the volume was mounted for the first time.
So my question is: is there any way to overwrite the volume's files when creating a container?
If your application needs persistent data, it should be stored in a different directory from the application code. This can be in a dedicated /data directory or in a subdirectory of your application; the important thing is that, when you mount a volume to hold the persistent data, it does not hide your application code.
In a Node application, for example, you could refer to a ./data for your data files:
import { open } from 'fs/promises';
import { join } from 'path';
const dataDir = process.env.DATA_DIR || 'data';
const fh = await open(join(dataDir, 'file.txt'), 'r+');
Then in your Dockerfile you'd need to create that directory. If you set up a non-root user, that directory, but not your code, should be owned by the user.
FROM node:lts
# Create the non-root user
RUN adduser --system --no-create-home nonroot
# Install the Node application normally
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY index.js .
# Create the data directory
RUN mkdir data && chown nonroot data
# Specify how to run the container
USER nonroot
CMD ["node", "index.js"]
Then when you launch the container, mount the volume only on the data directory, not over the entire /app tree.
docker run \
-p 5001:5001 \
--name need-backup-container \
-v nodeServer-and-commandLineTool-volume:/app/data \
need-backup-image
# ^^^^^^^^^
Note that the Dockerfile as shown here would also let you use a host directory instead of a Docker named volume, and specify the host uid when you run the container. You do not need to make any changes to the image to do this.
docker run \
-p 5002:5001 \
--name same-image-with-bind-mount \
-u $(id -u) \
-v "$PWD/app-data:/app/data" \
need-backup-image
I am trying to create a Docker container with bind9 on it, and I want to add my db.personal-domain.com file. But when I run docker build and then docker run -tdp 53:53 -v config:/etc/bind <image id>, the container doesn't have my db.personal-domain.com file. How can I fix that? Thanks!
tree structure
-DNS
--Dockerfile
--config
---db.personal-domain.com
Dockerfile
FROM ubuntu:20.04
RUN apt-get update
RUN apt-get install -y bind9
RUN apt-get install -y bind9utils
WORKDIR /etc/bind
VOLUME ["/etc/bind"]
COPY config/db.personal-domain.com /etc/bind/db.personal-domain.com
EXPOSE 53/tcp
CMD ["/usr/sbin/named", "-g", "-c", "/etc/bind/named.conf", "-u", "bind"]
There is a syntax issue in your docker run -v option. If you use docker run -v name:/container/path (even if name matches a local file or directory), it mounts a named volume over your configuration directory. You want the host content, so you need the -v /absolute/host/path:/container/path syntax (the host path must start with /). So (on Linux/macOS):
docker run -d -p 53:53 \
-v "$PWD/config:/etc/bind" \
my/bind-image
In your image you're trying to COPY in the config file. This could work also; but it's being hidden by the volume mount, and also rendered ineffective by the VOLUME statement. (The most obvious effect of VOLUME is to prevent subsequent changes to the named directory; it's not required to mount a volume later.)
If you delete the VOLUME line from the Dockerfile, it should also work to run the container without the -v option at all. (But if you'll have different DNS configuration on different setups, this probably isn't what you want.)
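For reference, this is the question's Dockerfile with only the VOLUME line removed (everything else unchanged):

```dockerfile
FROM ubuntu:20.04
RUN apt-get update
RUN apt-get install -y bind9
RUN apt-get install -y bind9utils
WORKDIR /etc/bind
COPY config/db.personal-domain.com /etc/bind/db.personal-domain.com
EXPOSE 53/tcp
CMD ["/usr/sbin/named", "-g", "-c", "/etc/bind/named.conf", "-u", "bind"]
```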
-v config:/etc/bind
That syntax creates a named volume, called config. It looks like you wanted a host volume pointing to a path in your current directory, and for that you need to include a fully qualified path with the leading slash, e.g. using $(pwd) to generate the path:
-v "$(pwd)/config:/etc/bind"
I have to start each instance of my application as a separate Docker service. The base image is the same but the configuration file is different for each instance. Now, the problem is my application makes some changes to the configuration file. And I want the configuration changes to persist so that when my application restarts (as docker service), it uses the updated configuration.
I am able to use the config file as a mount point using docker config. But the problem is no matter what mode (rwx) I give, I am not able to update the config file from inside the container. The mounted config is always Read-only file system.
1. How do I make the changes to the config file from docker container?
2. How do I make the updated config file persist outside the container, so that on service restart, the updated configuration is used?
I did the following to decouple config file from the image/container:
docker config create my-config config.txt
docker service create \
--name redis \
--config src=my-config,target=/config.txt,mode=0660 \
redis:alpine
docker container exec -ti <containerId> /bin/sh
The config file is mounted at /config.txt but I am not able to edit it.
The config will be read-only by design. But you can copy it to another file inside your container as part of an entrypoint script defined in your image.
docker config create my-config config.txt
docker service create \
--name redis \
--config src=my-config,target=/config.orig,mode=0660 \
username/redis:custom
The entrypoint script would include the following:
if [ ! -f /config.txt -a -f /config.orig ]; then
cp /config.orig /config.txt
fi
# skipping the typical exec command here since redis has its own entrypoint
# exec "$@" # run the CMD as pid 1
exec docker-entrypoint.sh "$@"
Your Dockerfile to build that image would look like:
FROM redis:alpine
COPY /entrypoint.sh /
ENTRYPOINT [ "/entrypoint.sh" ]
And you'd build that with:
docker build -t username/redis:custom .
Docker swarm configs are read-only, not only from inside the container but also from the outside. To update the config of your service you must create a new config, as explained in the docker swarm config docs.
How do I update the config of my service?
You need to copy the config, edit it, save it with a new name, and then update the service:
# Get the config from docker to file
docker config inspect --pretty my-config | tail -n +6 > conf-file
# Edit conf-file as needed here
...
# Save it with new name
docker config create my-config-v2 conf-file
# Update the service
docker service update \
--config-rm my-config \
--config-add source=my-config-v2,target=/config.txt \
redis
How do I update the config from inside the container?
For this you'll need to have access to docker from inside the container. You can do so by mounting the docker executable and the docker socket into the container:
docker run -it -v /var/run/docker.sock:/var/run/docker.sock \
-v /usr/bin/docker:/usr/bin/docker \
ubuntu bash
I need to install a custom bundle in a dockerized servicemix image. To do so, I need to paste some files in the /etc directory of the servicemix image.
Could anyone help me doing this?
I've tried using the Dockerfile as follows:
But it simply doesn't work. I've looked through the documentation of the image, and the author tells me to use the command docker run --volumes-from servicemix-data -it ubuntu bash and inspect /servicemix, but it's empty.
Dockerfile:
FROM dskow/apache-servicemix
WORKDIR .
COPY ./docs /apache-servicemix/etc
...
Command suggested by the author:
docker run --volumes-from servicemix-data -it ubuntu bash
I was unfamiliar with this approach but, having looked at the source (link), I think this is what you want to do:
Create a container called servicemix-data that will become your volume:
docker run --name servicemix-data -v /servicemix busybox
Confirm this worked:
docker container ls --format="{{.ID}}\t{{.Names}}" --all
42b3bc4dbedf servicemix-data
...
Then you want to copy the files into this container:
docker cp ./docs servicemix-data:/etc
Finally, run servicemix using this container (with your files) as the source for its data:
docker run \
--detach \
--name=servicemix \
--volumes-from=servicemix-data \
dskow/apache-servicemix
HTH!
Changes in the container will be lost unless they are committed back to the image.
You can use this Dockerfile https://hub.docker.com/r/mkroli/servicemix/dockerfile and add your COPY statement just before the ENTRYPOINT.
COPY ./docs /opt/apache-servicemix/etc