Docker `docker-entrypoint-initdb.d` not getting linked

I'm trying to add an init.sh script to the docker-entrypoint-initdb.d so I can finish provisioning my DB in a docker container. The script is in a scripts directory in my local directory where the Dockerfile lives. The Dockerfile is simply:
FROM glats/alpine-lamp
ENV MYSQL_ROOT_PASSWORD=password
The build command completes with no errors, and when I run the container it also starts fine, with the volume holding the init script attached:
docker run -d --name mydocker -p 8080:80 -it mydocker \
-v ~/Docker/scripts:/docker-entrypoint-initdb.d
However when I log into the running container, I don't see any docker-entrypoint-initdb.d directory, and obviously the init.sh never runs:
/ # ls /
bin etc media proc sbin tmp
dev home mnt root srv usr
entry.sh lib opt run sys var
Does anyone know why the volume isn't getting mounted?

There is no such logic defined in the Docker image you are using: the entrypoint of that image just starts MySQL and httpd and has no logic to initialize a database from an entrypoint script.
If you want init scripts to be run by the MySQL entrypoint to build up the database, you need to use the official MySQL image. From its documentation:
Initializing a fresh instance
When a container is started for the first time, a new database with the specified name will be created and initialized with the provided configuration variables. Furthermore, it will execute files with extensions .sh, .sql and .sql.gz that are found in /docker-entrypoint-initdb.d.
Also, when running containers it is better to stick to one process per container; you can look at a docker-compose file that runs this stack as separate MySQL and httpd containers, following that rule.
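As a minimal sketch of the official-image approach (container name and tag are illustrative; the host script path is taken from the question):
docker run -d --name mydb \
  -e MYSQL_ROOT_PASSWORD=password \
  -v ~/Docker/scripts:/docker-entrypoint-initdb.d \
  mysql:8.0
Note that all options, including -v, must come before the image name; anything placed after the image name is passed to the container as its command.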

Related

Apache/Nifi 1.12.1 Docker Image Issue

I have a Dockerfile based on apache/nifi:1.12.1 and want to expand it like this:
FROM apache/nifi:1.12.1
RUN mkdir -p /opt/nifi/nifi-current/conf/flow
Thing is that the folder isn't created when I'm building the image from Linux distros like Ubuntu and CentOS. Build succeeds, I run it with docker run -it -d --rm --name nifi nifi-test but when I enter the container through docker exec there's no flow dir.
Strange thing is that the flow dir is created normally when I build the image through Windows and Docker Desktop. I can't understand why this is happening.
I've tried things such as USER nifi or RUN chown ... but it still doesn't work.
For your convenience, this is the base image:
https://github.com/apache/nifi/blob/rel/nifi-1.12.1/nifi-docker/dockerhub/Dockerfile
Thanks in advance.
By taking a look at the Dockerfile provided, you can see a volume definition like the one sketched below.
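Paraphrasing the linked Dockerfile (exact variable names may differ slightly; ${NIFI_HOME} resolves to /opt/nifi/nifi-current):
VOLUME ${NIFI_LOG_DIR} \
       ${NIFI_HOME}/conf \
       ${NIFI_HOME}/database_repository \
       ${NIFI_HOME}/flowfile_repository \
       ${NIFI_HOME}/content_repository \
       ${NIFI_HOME}/provenance_repository \
       ${NIFI_HOME}/state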
Then if you run
docker image inspect apache/nifi:1.12.1
you will see those same paths listed under Config.Volumes.
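To print just the declared volumes, you can use a Go template (output abridged and illustrative):
docker image inspect --format '{{json .Config.Volumes}}' apache/nifi:1.12.1
# e.g. {"/opt/nifi/nifi-current/conf":{},"/opt/nifi/nifi-current/state":{},...}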
As a result, when the RUN command creates a folder under the conf directory, it succeeds at build time. BUT when you run the container, the volumes are mounted and overwrite everything under the mountpoint /opt/nifi/nifi-current/conf, in your case the flow directory.
You can test this by editing your Dockerfile:
FROM apache/nifi:1.12.1
# this will be overridden by the volumes
RUN mkdir -p /opt/nifi/nifi-current/conf/flow
# this will be available in the running container
RUN mkdir -p /opt/nifi/nifi-current/flow
To tackle this you could:
1. Clone the Dockerfile of the image you use as a base (the one in FROM), remove the VOLUME directive manually, then build it and use it as your base image (see the sketch below).
2. Avoid adding directories under the mount points specified in the Dockerfile.
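A sketch of the first option, assuming you saved a copy of the upstream Dockerfile with its VOLUME directive removed under ./nifi-base/ (the image tag is illustrative):
docker build -t nifi-novolume:1.12.1 ./nifi-base
Then use it as the base image:
FROM nifi-novolume:1.12.1
RUN mkdir -p /opt/nifi/nifi-current/conf/flow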

Why docker run can't find a file which was copied during build

Dockerfile
FROM centos
RUN mkdir /test
# it is ensured that sample.sh exists where the Dockerfile lives and is being run
COPY ./sample.sh /test
CMD ["sh", "/test/sample.sh"]
Docker run cmd:
docker run -d -p 8081:8080 --name Test -v /home/Docker/Container_File_System:/test test:v1
Log output:
sh: /test/sample.sh: No such file or directory
There are 2 problems here:
1. The output says sh: /test/sample.sh: No such file or directory.
2. As I have mapped a host folder to the container folder, I was expecting the test folder and sample.sh to be available at /home/Docker/Container_File_System after the run, which did not happen.
Any help is appreciated.
When you map a folder from the host into the container, the host's files become available in the container and hide whatever the image had at that path. This means that if your host folder has a file a.txt and the container path has b.txt, when you run the container a.txt becomes available in the container and b.txt is no longer visible or accessible. Additionally, b.txt is never available on the host.
In your case, since your host folder does not contain sample.sh, the moment you mount the directory sample.sh is no longer available in the container, which causes the error.
What you want to do is copy sample.sh into the host directory before starting the container.
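For example (paths and names taken from the question):
cp sample.sh /home/Docker/Container_File_System/
docker run -d -p 8081:8080 --name Test -v /home/Docker/Container_File_System:/test test:v1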
The problem is in the volume mapping. If I create a named volume and map it, it works fine, but directly mapping a host folder to the container folder does not. (This is because a newly created named volume is pre-populated with the files already present at the container path, while a host-folder bind mount hides them.)
Below worked fine
docker volume create my-vol
docker run -d -p 8081:8080 --name Test -v my-vol:/test test:v1

Passing local CQL commands file to Cassandra Docker container

Is it possible to pass a local file for CQL commands to a Cassandra Docker container?
Using docker exec fails as it cannot find the local file:
me@meanwhileinhell:~$ ls -al
-rw-r--r-- 1 me me 1672 Sep 28 11:02 createTables.cql
me@meanwhileinhell:~$ docker exec -i cassandra_1 cqlsh -f createTables.cql
Can't open 'createTables.cql': [Errno 2] No such file or directory: 'createTables.cql'
I would really like not to have to open a bash session and run a script that way.
The container needs to be able to access the script first before you can execute it (i.e. the script file needs to be inside the container). If this is just a quick one-off run of the script, the easiest thing to do is probably to just use the docker cp command to copy the script from your host to the container:
$ docker cp createTables.cql container_name:/path/in/container
You should then be able to use docker exec to run the script at whatever path you copied it to inside the container. If this is something that's a work in progress and you might be changing and re-running the script while you're working on it, you might be better off mounting a directory with your scripts from your host inside the container. For that you'll probably want the -v option of docker run.
Hope that helps!
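A minimal end-to-end sketch (container name and file taken from the question; the in-container path is illustrative):
docker cp createTables.cql cassandra_1:/tmp/createTables.cql
docker exec -i cassandra_1 cqlsh -f /tmp/createTables.cql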
If you want the Docker container to see files on the host system, the only way is to mount a volume. You can map the current directory to /tmp and run the command again: docker exec -i cassandra_1 cqlsh -f /tmp/createTables.cql
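Note that the mount has to be in place when the container is first started. A sketch, assuming the official cassandra image:
# start the container with the current directory mounted at /tmp
docker run -d --name cassandra_1 -v "$PWD":/tmp cassandra
docker exec -i cassandra_1 cqlsh -f /tmp/createTables.cql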

Docker run -v mounting a file as a directory

I'm trying to mount a directory containing a credentials file onto a container with the -v flag, but instead of mounting that file as a file, it mounts it as a directory:
Run script:
$ mkdir creds/
$ cp key.json creds/key.json
$ ls -l creds/
total 4
-rw-r--r-- 1 root root 2340 Oct 12 22:59 key.json
$ docker run -v /*pathToWorkingDirectory*/creds:/creds *myContainer* *myScript*
When I look at the debug spew of the docker run command, I see that it creates the /creds/ directory, but for some reason creates key.json as a subdirectory under that, rather than copying the file.
I've seen some other posts saying that if you tell docker run to mount a file it can't find, it will create a directory in the container with that filename. But since I didn't specify the filename and it knew to name the new directory 'key.json', it seems like it was able to find the file, yet created it as a directory anyway? Has anyone run into this before?
In case it's relevant, the script is being run in Docker-in-Docker in another container as part of GitLab's CI process.
You are running Docker-in-Docker. This means that when you specify a -v volume, Docker looks for the path on the host, since the shared socket enabling Docker-in-Docker actually means your run command starts a container alongside the runner container, not inside it. The path only exists inside the runner container, so the daemon finds nothing on the host and creates empty directories at the requested locations instead.
I explain this in more detail in this SO answer:
https://stackoverflow.com/a/46441426/2078207
Also notice the comment below this answer to get a direction for a solution.
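One common workaround, sketched under the assumption that bind mounts are unusable in this setup (image and script names are placeholders standing in for the question's values): create the container, copy the credentials in through the API with docker cp, then start it.
# 'myimage' and 'myscript' are placeholders
docker create --name job myimage myscript
docker cp creds job:/creds
docker start -a job
docker cp streams the files through the Docker daemon, so it works regardless of which host the daemon actually runs on.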

docker: Only execute command when container is running

I want to create a backup of a mount volume in my docker container.
This is the command in my dockerfile:
RUN tar -cvpzf test.tar -C /test/ .
But the problem is that it can only be executed after my volume is mounted (because my volume will be mounted to /test/).
So this command needs to be executed after starting the Docker container, not while I'm creating the image. How can I do that?
Thanks
Once your container is running, assuming the container has tar, you can do exactly what you want with:
docker exec nameofcontainer [options] tar -cvpzf test.tar -C /test/ .
You can get the names of running containers using docker ps. For options, you may want to use -ti so that you can see the output.
You could also build the container with a custom ENTRYPOINT or CMD which both starts whatever the primary job of the container is going to be and runs your backup script, along with any other tasks that need performing.
The official mysql container does something like this with its docker-entrypoint.sh script; a sketch of the pattern follows.
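A minimal sketch of such an entrypoint (the archive path and the main process are placeholders):
#!/bin/sh
# hypothetical entrypoint.sh: back up the mounted volume, then hand off to the main process
tar -cvpzf /test.tar -C /test/ .
exec "$@"
And in the Dockerfile:
COPY entrypoint.sh /entrypoint.sh
RUN chmod +x /entrypoint.sh
ENTRYPOINT ["/entrypoint.sh"]
# placeholder for the container's primary process
CMD ["your-main-process"]
Because the entrypoint runs at container start, the volume is already mounted at /test/ when tar executes.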
