docker-compose and listing volume contents

Maybe I'm just not understanding correctly but I'm trying to visually verify that I have used volumes properly.
In my docker-compose I'd have something like
some-project:
  volumes:
    - /some-local-path/some-folder:/v-test
I can verify its contents via "ls -la /some-local-path/some-folder".
In some-project's Dockerfile I'd have something like
RUN ls -la /v-test
which returns "No such file or directory".
Is this the correct way to use it? If so, why can't I view the contents from inside the container?

Everything in the Dockerfile runs before anything outside the build: block in the docker-compose.yml file is considered. The image build doesn't see volumes or environment variables that get declared only in docker-compose.yml, and it can't access other services.
In your example, the Dockerfile first tries to ls the directory at build time, when the bind mount doesn't exist yet; only later does Compose start the container with the bind mount in place.
If you're just doing this for verification, you can docker-compose run a container with most of its settings from the docker-compose.yml file, but an alternate command:
docker-compose run some-project \
  ls -la /v-test
(Doing this requires that the image's CMD be a well-formed shell command; either it has no ENTRYPOINT, or the ENTRYPOINT is a wrapper script that ends in exec "$@" to run the CMD. If you only have an ENTRYPOINT, change it to CMD; if you've split the command across both directives, consolidate it into a single CMD line, as in the sketch below.)
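For example, a hypothetical image that splits its command across both directives could be consolidated like this (python and app.py are placeholders, not from the question):
# Before: the command is split across both directives, so
#   docker-compose run some-project ls -la /v-test
# would be passed as arguments to the "python" entrypoint
# ENTRYPOINT ["python"]
# CMD ["app.py"]

# After: a single CMD that docker-compose run can replace cleanly
CMD ["python", "app.py"]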

Related

How to check content of a volume from Dockerfile or compose file?

I would like to have a docker volume to persist data. The persisted data can be accessed by different containers based on different images.
It is not a host volume. It is a volume listed in the volumes panel of Docker Desktop.
For example, the name of the volume is theVolume which is mounted at /workspace. The directory I need to inspect is /workspace/project.
I need to check whether a specific directory is available inside the volume. If it is not, create the directory, else leave it as is.
Is it possible to do this from within a Dockerfile or compose file?
It's possible to do this in an entrypoint wrapper script. This runs as the main container process, so it's invoked after the volume is mounted in the container. The script isn't aware of what specific thing might be mounted on /workspace, so this will work whether you've mounted a named volume, a host directory, or nothing at all. It does need to make sure to actually start the main container command when it's done.
#!/bin/sh
# entrypoint.sh

# Create the project directory if it doesn't exist
if [ ! -d /workspace/project ]; then
  mkdir /workspace/project
fi

# Run the main container command
exec "$@"
Make sure this file is executable on your host system (run chmod +x entrypoint.sh before checking it in). Make sure it's included in your Docker image, and then make this script be the image's ENTRYPOINT.
# if a previous `COPY ./ ./` doesn't already get it
COPY entrypoint.sh ./
# must use JSON-array syntax
ENTRYPOINT ["./entrypoint.sh"]
# the main container command, same as you have now
CMD the main container command
(If you're using ENTRYPOINT for the main container command, you may need to change it to CMD for this to work; if you've split the interpreter into its own ENTRYPOINT line, combine the whole container command into a single CMD.)
A Dockerfile RUN command happens before the volume is mounted (or maybe even exists at all) and so it can't modify the volume contents. A Compose file doesn't have any way to run commands, beyond replacing the image's entrypoint and command.
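For completeness, a minimal sketch of the Compose side, assuming a service named app built from the Dockerfile above, with the named volume theVolume from the question mounted at /workspace:
services:
  app:
    build: .
    volumes:
      # the entrypoint wrapper runs after this mount is in place
      - theVolume:/workspace
volumes:
  theVolume: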

Why can some directories in a Docker container be mounted to share files out while others cannot

I'm a new learner of Docker. I came across a problem while trying to make my own Docker image.
Here's the thing. I created a new Dockerfile to build my own mysql image, in which I declared MYSQL_ROOT_PASSWORD and put some init scripts into the container.
Here is my Dockerfile:
FROM mysql:5.7
MAINTAINER CarbonFace <553127022@qq.com>
ENV TZ Asia/Shanghai
ENV MYSQL_ROOT_PASSWORD Carbon#mysqlRoot7
ENV INIT_DATA_DIR /initData/sql
ENV INIT_SQL_FILE_0 privileges.sql
ENV INIT_SQL_FILE_1 carbon_user_sql.sql
ENV INIT_SQL_FILE_2 carbonface_sql.sql
COPY ./my.cnf /etc/mysql/donf.d/
RUN mkdir -p $INIT_DATA_DIR
COPY ./sqlscript/$INIT_SQL_FILE_0 $INIT_DATA_DIR/
COPY ./sqlscript/$INIT_SQL_FILE_1 $INIT_DATA_DIR/
COPY ./sqlscript/$INIT_SQL_FILE_2 $INIT_DATA_DIR/
COPY ./sqlscript/$INIT_SQL_FILE_0 /docker-entrypoint-initdb.d/
COPY ./sqlscript/$INIT_SQL_FILE_1 /docker-entrypoint-initdb.d/
COPY ./sqlscript/$INIT_SQL_FILE_2 /docker-entrypoint-initdb.d/
CMD ["mysqld"]
I'm trying to build a docker image that contains my own config file, so that when a volume is mounted, the file shows up in the local directory and can be modified.
I'm really confused by what happens when I start my container from this image, following the official description. Here are my commands:
docker run -dp 3306:3306 \
  -v /usr/local/mysql/data:/var/lib/mysql \
  -v /usr/local/mysql/conf:/etc/mysql/conf.d \
  --name mysql mysql:<my built tag>
I'm trying to mount /usr/local/mysql/conf onto /etc/mysql/conf.d in the container, which is documented as the location for custom config files.
I assumed that my custom config file my.cnf, which was copied into the image during docker build, would then show up in my local directory /usr/local/mysql/conf, since I had already copied it into the image, as you can see in my Dockerfile.
But it turns out that the directory stays empty, and /etc/mysql/conf.d in the container is overwritten by the local directory as well.
Before I run my container, both /usr/local/mysql/conf and /usr/local/mysql/data are completely empty.
OK, fine: I've been told that a volume-mounted directory overwrites the files inside the container.
But then how can the empty data directory end up showing the data files from inside the container, while the empty conf directory wipes out the conf.d directory in the container?
It makes no sense.
I'm very confused, and I would really appreciate it if someone could explain why this happens.
My OS is macOS Big Sur and I'm using the latest Docker.
A host-directory bind mount, -v /host/path:/container/path, always hides the contents of the image and replaces it with the host directory. If the host directory is empty, the container directory will be the same empty directory at container startup time.
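You can see the hiding behavior with a quick experiment (a sketch; the directory name empty is arbitrary):
mkdir empty
# The bind mount replaces the image's conf.d, so this prints nothing
docker run --rm --entrypoint ls -v "$PWD/empty:/etc/mysql/conf.d" mysql:5.7 /etc/mysql/conf.d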
The Docker Hub mysql container has an involved entrypoint script that checks to see if the data directory is empty and, if so, initializes the database; heavily abstracted:
#!/bin/sh
# (actually hundreds of lines of shell code, with more options)
if [ ! -d /var/lib/mysql/mysql ]; then
  mysql_install_db
  # (...and start a temporary database server and run the
  # /docker-entrypoint-initdb.d scripts)
fi
# then run the main container command
exec "$@"
The mere presence of a volume doesn't cause files to be copied out (with one exception: when a container starts with an empty named volume, Docker copies the image's content at that path into the volume first). So if you need to copy content from a container to the host, you either need to do it manually with docker cp or have the container code do it; a sketch of the docker cp route follows.
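For example, to seed the host directory with the image's built-in conf.d content before mounting over it, a rough sketch (the temporary container name tmp-mysql is arbitrary):
# Create (but don't start) a throwaway container from the image
docker create --name tmp-mysql mysql:<my built tag>
# Copy the directory's contents out to the host
docker cp tmp-mysql:/etc/mysql/conf.d/. /usr/local/mysql/conf/
# Clean up
docker rm tmp-mysql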

How can I copy files from the GitLab Runner helper container to the build container?

Set up
I set up GitLab Runner with a KubernetesExecutor and want to create a custom helper image which adds some extra files to the build container.
The current set up is quite basic:
A Dockerfile, which adds some files (start.sh and Dockerfile) into the container.
A start.sh file which is present in the helper image. This should be executed when the helper is run.
Code
start.sh
#!/bin/bash
printenv > /test.out # Check whether the script is run.
cp /var/Dockerfile /builds/Dockerfile # Copy file to shared volume.
exec "$@"
Dockerfile
FROM gitlab/gitlab-runner-helper:x86_64-latest
ADD templates/Dockerfile /var/Dockerfile
ADD start.sh /var/run/start.sh
ENTRYPOINT ["sh", "/var/run/start.sh"]
The shared volume between the containers is: /builds. As such, I'd like to copy /var/Dockerfile to /builds/Dockerfile.
Problem
I can't seem to find a way to (even) run start.sh when the helper image is executed. Using kubectl exec -it pod-id -c build bash and kubectl exec -it pod-id -c helper bash, I check whether the files are created. When I run start.sh manually from the latter shell, the files are copied. However, neither /test.out nor /builds/Dockerfile is present when I first log in to the helper container.
Attempts
I've tried setting up a different CMD (/var/run/start.sh), but it seems like it simply doesn't run the sh file.

Is it possible to run a script or executable on a Docker container with only docker-compose.yml, without a Dockerfile

With a configured Dockerfile, scripts or executables can run automatically on a Docker container during docker-compose up --build, via the RUN instruction, which executes them at build time.
Question: But is it possible to achieve the same goal, i.e. run executables or scripts, with docker-compose only, without a Dockerfile? Is there a command in docker-compose.yml similar to RUN in a Dockerfile?
What you can do in docker-compose is override the default command that is executed after the build, by setting command.
See here: https://docs.docker.com/compose/compose-file/#command
I don't think there is a RUN-like thing for docker-compose.yml.
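A minimal sketch of the command override (the service name app and the alpine image are placeholders):
services:
  app:
    image: alpine
    # replaces the image's default command when the container starts
    command: ["echo", "hello"]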
If I understand your problem, then the answer is yes: in your container's configuration in your docker-compose.yml file, use:
entrypoint: ["/bin/sh", "-c"]
command:
  - |
    ls -la
    echo 'hello'
or whatever commands you want to run.
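Put together as a complete docker-compose.yml, a sketch (again, the service name app and the alpine image are assumptions):
services:
  app:
    image: alpine
    # run the literal block below through /bin/sh -c
    entrypoint: ["/bin/sh", "-c"]
    command:
      - |
        ls -la
        echo 'hello'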

When does the docker volume become available for CMD script?

I have a container that binds a volume to a local path. This volume is used in a script that is run by the Dockerfile CMD.
I noticed that often this path does not exist at the time the CMD script is executed.
Is there anything I can do to guarantee that the volume binding exists at the time CMD is run?
Example (although it is trivial):
docker-compose:
...
volumes:
  - /foo:/bar
...
script:
...
cat /bar/run.txt
...
Volumes are attached to a container when you run it, and CMD runs as the main container process at that point, after the volumes are mounted. A Dockerfile RUN command, by contrast, executes at image build time, before any volume exists. If you want a script available inside the image at build time, copy it in:
ADD localScript.sh /scriptForImage.sh
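A sketch of how that fits into a Dockerfile, assuming the script should run as the main process (the chmod and CMD lines are my assumption, not part of the original answer):
# Bake the script into the image at build time
COPY localScript.sh /scriptForImage.sh
RUN chmod +x /scriptForImage.sh
# Run it at container start; by then the /foo:/bar bind mount exists
CMD ["/scriptForImage.sh"]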
