Entrypoint can't execute command - docker

I don't understand why my entrypoint can't execute my command. My entrypoint looks like this:
#!/bin/bash
...
exec "$#"
My script exists; I can run it when I go inside my container:
drwxrwxrwx 1 root root 512 mars 25 09:07 .
drwxrwxrwx 1 root root 512 mars 25 09:07 ..
-rwxrwxrwx 1 root root 128 mars 25 10:05 entrypoint.sh
-rwxrwxrwx 1 root root 481 mars 25 09:07 init-dev.sh
-rwxrwxrwx 1 root root 419 mars 25 10:02 migration.sh
root@0c0062fbf916:/app/scripts# pwd
/app/scripts
And when I run my container: docker run my_container "scripts/migration.sh"
I got this error:
scripts/entrypoint.sh: line 8: /app/scripts/migration.sh: No such file or directory
I get a similar error if I just run ls -all:
docker run my_container "ls -all"
exec: ls -all: not found
I'm switching back and forth between Linux and Windows, so I checked the line endings (LF vs. CRLF), but changing them made no difference.

Your first command doesn't work because your scripts are in /app/scripts (note the plural), but you're trying to run script/migration.sh. Additionally, it's not clear what the current working directory is in your container: even if you wrote scripts/migration.sh, that would only work if either (a) your Dockerfile contains WORKDIR /app, or (b) your docker run command line includes -w /app. You would be better off using a fully qualified path:
docker run my_container /app/scripts/migration.sh
Your second example (docker run my_container "ls -all") is over-quoted and would never work. You need to write docker run my_container ls -all, except that -all isn't actually an option that ls accepts, although it will work by virtue of being the combination of the -a and -l options.
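For reference, the conventional pattern for an entrypoint that forwards the container's command is exec "$@" ("$@" expands to the arguments passed to the script, whereas "$#" would expand only to their count). A minimal sketch, assuming the script is installed as /app/scripts/entrypoint.sh and declared with ENTRYPOINT ["/app/scripts/entrypoint.sh"]:
#!/bin/bash
set -e
# ...any setup steps go here...
# hand control over to whatever command was passed on the docker run command line
exec "$@"
With that sketch in place, docker run my_container /app/scripts/migration.sh would run the migration script as the container's main process once the setup steps finish.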

Docker-Compose bind volume only if exists

I have a volume which uses bind to share a local directory. Sometimes this directory doesn't exist, and then everything breaks. How can I tell docker-compose to look for the directory and use it if it exists, or to continue without that volume if it doesn't?
Volume example:
- type: bind
  read_only: true
  source: /srv/share/
  target: /srv/share/
How can I tell docker-compose to look for the directory and use it if it exists, or to continue without that volume if it doesn't?
As far as I am aware, you can't do conditional logic to mount a volume, but I am getting around it in a project of mine like this:
version: "2.1"
services:
elixir:
image: elixir:alpine
volumes:
- ${VOLUME_SOURCE:-/dev/null}:${VOLUME_TARGET:-/.devnull}:ro
Here I am using /dev/null as the fallback, but in my real project I just use an empty file to do the mapping.
The ${VOLUME_SOURCE:-/dev/null} syntax is how the shell substitutes a default value for a variable that is not set, and Docker Compose supports the same syntax in compose files.
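A quick illustration of that substitution in a plain shell (the variable name matches the compose file above):
$ unset VOLUME_SOURCE
$ echo "${VOLUME_SOURCE:-/dev/null}"   # unset, so the fallback is used
/dev/null
$ VOLUME_SOURCE=./testing
$ echo "${VOLUME_SOURCE:-/dev/null}"   # set, so the value wins
./testing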
Testing it without setting the env vars
$ sudo docker-compose run --rm elixir sh
/ # ls -al /.devnull
crw-rw-rw- 1 root root 1, 3 May 21 12:27 /.devnull
Testing it with the env vars set
Creating the .env file:
$ printf "VOLUME_SOURCE=./testing \nVOLUME_TARGET=/testing\n" > .env && cat .env
VOLUME_SOURCE=./testing
VOLUME_TARGET=/testing
Creating the volume for test purposes:
$ mkdir testing && touch testing/test.txt && ls -al testing
total 8
drwxr-xr-x 2 exadra37 exadra37 4096 May 22 13:12 .
drwxr-xr-x 3 exadra37 exadra37 4096 May 22 13:12 ..
-rw-r--r-- 1 exadra37 exadra37 0 May 22 13:12 test.txt
Running the container:
$ sudo docker-compose run --rm elixir sh
/ # ls -al /testing/
total 8
drwxr-xr-x 2 1000 1000 4096 May 22 12:01 .
drwxr-xr-x 1 root root 4096 May 22 12:07 ..
-rw-r--r-- 1 1000 1000 0 May 22 12:01 test.txt
/ #
I don't think there is an easy way to do that with the docker-compose syntax yet. Here is how I went about it, but note that the container will not start at all if the volume is missing:
Check the launch command with docker inspect on the unpatched container.
Change your command to something like this (here using egorive/seafile-mc:8.0.7-rpi on a Raspberry Pi, where the data is on an external disk that might not always be plugged in):
volumes:
  - '/data/seafile-data:/shared:Z'
command: sh -c "( [ -f /shared/.docker-volume-check ] || ( echo volume not mounted, not starting; sleep 60; exit 1 )) && exec /sbin/my_init -- /scripts/start.py"
restart: always
Run touch .docker-volume-check in the root of your volume.
That way, you have a restartable container that fails and waits if the volume is not mounted. It also handles volumes in a generic way: for instance, when you create a new container whose volume has not yet been initialized by a first setup, it will still boot, because you're checking for a file you created yourself.
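The same guard is easier to read unpacked into a small wrapper script (a sketch only; the marker file, sleep interval and start command are the ones from the example above):
#!/bin/sh
# refuse to start until the bind-mounted disk is actually present
if [ ! -f /shared/.docker-volume-check ]; then
    echo "volume not mounted, not starting"
    sleep 60    # give restart: always something to wait on before retrying
    exit 1
fi
exec /sbin/my_init -- /scripts/start.py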

Docker - Can mount an NFS share into a container but not a sub-directory of it

I have an NFS share with the following properties:
Mounted on my host on /nfs/external_disk
Owner user is test_user with UID 1234
Group is test_group with GID 2222
Permissions is 750
I have a small Dockerfile with the following content
ARG tag=lts
FROM jenkins/jenkins:${tag}
USER root
# Create a new user and new group that matches what is on the host.
ARG username=test_user
ARG groupname=test_group
ARG uid=1234
ARG gid=2222
RUN groupadd -g ${gid} ${groupname} && \
    mkdir -p /users && \
    useradd -l -m -u ${uid} -g ${groupname} -s /bin/bash -d /users/${username} ${username}
USER ${username}
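For reference, one way to build this image with the build arguments declared above (a sketch; the values shown are just the defaults, so they could also be omitted):
docker build -t custom_jenkins \
    --build-arg uid=1234 --build-arg gid=2222 \
    --build-arg username=test_user --build-arg groupname=test_group .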
After building the image (named custom_jenkins), when I run the following command, the container starts properly and I see the original Jenkins home contents copied to the share.
docker run -td --rm -v /nfs/external_disk:/var/jenkins_home custom_jenkins
However if I want to mount a sub-directory of the NFS share, say ${NFS_SHARE}/jenkins_home, then I get an error:
docker run -td --rm -v /nfs/external_disk/jenkins_home:/var/jenkins_home custom_jenkins
docker: Error response from daemon: error while creating mount source path '/nfs/external_disk/jenkins_home': mkdir /nfs/external_disk/jenkins_home: permission denied.
Even if I create the sub-directory myself before starting the container, I still get the same error, even when I set the sub-directory's permissions to 777.
Note that I am running as test_user which has the same UID/GID as in the container and it actually owns the NFS share.
I have a feeling that when Docker attempts to create the sub-directory, it does so as some other user (e.g. the "docker" user), which makes the operation fail since that user has no access inside the share.
Can anyone help? Thanks in advance.
I tried to reproduce this and it works just fine for me, so perhaps I am missing some constraint; hope this helps anyway. Note at step 6 the owner and group of the file that I created from the container: this might answer one of your questions.
Step 1: I created an NFS share somewhere on my LAN
Step 2: I mounted the share on the machine that's running the docker engine
sudo mount 192.168.0.xxx:/i-data/b4024d5b/nfs/NFS /mnt/nsa320/
neo@neo-desktop:nsa320$ mount | grep NFS
192.168.0.xxx:/i-data/b4024d5b/nfs/NFS on /mnt/nsa320 type nfs (rw,relatime,vers=3,rsize=32768,wsize=32768,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,mountaddr=192.168.0.xxx,mountvers=3,mountport=3775,mountproto=udp,local_lock=none,addr=192.168.0.xxx)
Step 3: I created some sample files and a sub-directory:
neo@neo-desktop:nsa320$ ls -la /mnt/nsa320/
total 12
drwxrwxrwx 3 root root 4096 Jul 21 22:54 .
drwxr-xr-x 3 root root 4096 Jul 21 22:41 ..
-rw-r--r-- 1 neo neo 0 Jul 21 22:45 dummyFile
-rw-r--r-- 1 root root 0 Jul 21 22:53 fileCreatedFromContainer << THIS WAS CREATED FROM A CONTAINER THAT WAS NOT LAUNCHED WITH THE --user OPTION
drwxr-xr-x 2 neo neo 4096 Jul 21 22:54 subfolder
Step 4: Launched a dummy container and mounted the sub-directory (1000 is the UID of the user neo in my OS):
docker run -d -v /mnt/nsa320/subfolder:/var/externalMount --user 1000 alpine tail -f /dev/null
Step 5: Connected to the container to check the mount (I can read and write in the subfolder located on the NFS):
neo@neo-desktop:nsa320$ docker exec -ti ded1dc79773e sh
/ $ ls /var/externalMount/
fileInSubfolder
/ $ touch /var/externalMount/fileInSubfolderCreatedFromContainer
Step 6: Back on the host, check who owns the file that I created from the container:
neo@neo-desktop:nsa320$ ls -la /mnt/nsa320/subfolder/
total 8
drwxr-xr-x 2 neo neo 4096 Jul 21 23:23 .
drwxrwxrwx 3 root root 4096 Jul 21 22:54 ..
-rw-r--r-- 1 neo neo 0 Jul 21 22:54 fileInSubfolder
-rw-r--r-- 1 neo root 0 Jul 21 23:23 fileInSubfolderCreatedFromContainer
Maybe off-topic: whoami executed in the container returns just the UID:
$ whoami
whoami: unknown uid 1000

sh: ./bc2influx: not found when entering a stopped container

I have an issue when running a docker container.
➜ bc_to_influx git:(master) ✗ docker run registry.gitlab.com/xxx/bc_to_influx:latest
standard_init_linux.go:207: exec user process caused "no such file or directory"
To debug, I enter the stopped container:
docker commit 0db73216baaf user/test_image
docker run -ti --entrypoint=sh user/test_image
With the ls command, I can see only my executable:
/bc2influx # ls -al
total 20552
drwxr-xr-x 1 root root 4096 Jun 6 10:32 .
drwxr-xr-x 1 root root 4096 Jun 6 11:53 ..
-rwxr-xr-x 1 root root 21034520 Jun 6 10:29 bc2influx
/bc2influx #
But when I try to execute it, I get:
/bc2influx # ./bc2influx
sh: ./bc2influx: not found
I can vi and cat the executable, but not execute it.
here is my Dockerfile
FROM alpine
WORKDIR /bc2influx/
COPY ./release/bc2influx /bc2influx/
RUN ls -al /bc2influx/
CMD [ "./bc2influx" ]
I previously built my executable with:
go build -o ./release/bc2influx -v -ldflags '-extldflags "-static"' ./...
Any idea what's going on?
Looks like a musl library issue; try this build command: go build -ldflags="-s -w".
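For context, a frequent cause of this symptom on Alpine is a Go binary that ended up dynamically linked against glibc, which Alpine (musl-based) does not ship. Disabling cgo is a common way to get a fully static binary; a sketch based on the build command from the question:
CGO_ENABLED=0 GOOS=linux go build -o ./release/bc2influx -v -ldflags '-s -w' ./...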

File ownership after docker cp

How can I control which user owns the files I copy in and out of a container?
The docker cp command says this about file ownership:
The cp command behaves like the Unix cp -a command in that directories are copied recursively with permissions preserved if possible. Ownership is set to the user and primary group at the destination. For example, files copied to a container are created with UID:GID of the root user. Files copied to the local machine are created with the UID:GID of the user which invoked the docker cp command. However, if you specify the -a option, docker cp sets the ownership to the user and primary group at the source.
It says that files copied to a container are created as the root user, but that's not what I see. I create two files owned by user id 1005 and 1006. Those owners are translated into the container's user namespace. The -a option seems to make no difference when I copy the file into a container.
$ sudo chown 1005:1005 test.txt
$ ls -l test.txt
-rw-r--r-- 1 1005 1005 29 Oct 6 12:43 test.txt
$ docker volume create sandbox1
sandbox1
$ docker run --name run1 -vsandbox1:/data alpine echo OK
OK
$ docker cp test.txt run1:/data/test1005.txt
$ docker cp -a test.txt run1:/data/test1005a.txt
$ sudo chown 1006:1006 test.txt
$ docker cp test.txt run1:/data/test1006.txt
$ docker cp -a test.txt run1:/data/test1006a.txt
$ docker run --rm -vsandbox1:/data alpine ls -l /data
total 16
-rw-r--r-- 1 1005 1005 29 Oct 6 19:43 test1005.txt
-rw-r--r-- 1 1005 1005 29 Oct 6 19:43 test1005a.txt
-rw-r--r-- 1 1006 1006 29 Oct 6 19:43 test1006.txt
-rw-r--r-- 1 1006 1006 29 Oct 6 19:43 test1006a.txt
When I copy files out of the container, they are always owned by me. Again, the -a option seems to do nothing.
$ docker run --rm -vsandbox1:/data alpine cp /data/test1006.txt /data/test1007.txt
$ docker run --rm -vsandbox1:/data alpine chown 1007:1007 /data/test1007.txt
$ docker cp run1:/data/test1006.txt .
$ docker cp run1:/data/test1007.txt .
$ docker cp -a run1:/data/test1006.txt test1006a.txt
$ docker cp -a run1:/data/test1007.txt test1007a.txt
$ ls -l test*.txt
-rw-r--r-- 1 don don 29 Oct 6 12:43 test1006a.txt
-rw-r--r-- 1 don don 29 Oct 6 12:43 test1006.txt
-rw-r--r-- 1 don don 29 Oct 6 12:47 test1007a.txt
-rw-r--r-- 1 don don 29 Oct 6 12:47 test1007.txt
-rw-r--r-- 1 1006 1006 29 Oct 6 12:43 test.txt
$
You can also change the ownership by logging into the container as the root user:
docker exec -it --user root <container-id> /bin/bash
chown -R <username>:<groupname> <folder/file>
In addition to @Don Kirkby's answer, let me provide a similar example in bash/shell script for the case where you want to copy something into a container while applying different ownership and permissions than those of the original file.
Let's create a new container from a small image that will keep running by itself:
docker run -d --name nginx nginx:alpine
Now we'll create a new file which is owned by the current user and has default permissions:
touch foo.bar
ls -ahl foo.bar
>> -rw-rw-r-- 1 my-user my-group 0 Sep 21 16:45 foo.bar
Copying this file into the container will set the owner and group to the UID/GID of my user and preserve the permissions:
docker cp foo.bar nginx:/foo.bar
docker exec nginx sh -c 'ls -ahl /foo.bar'
>> -rw-rw-r-- 1 4098 4098 0 Sep 21 14:45 /foo.bar
Using a little tar work-around, however, I can change the ownership and permissions that are applied inside of the container.
tar -cf - foo.bar --mode u=+r,g=-rwx,o=-rwx --owner root --group root | docker cp - nginx:/
docker exec nginx sh -c 'ls -ahl /foo.bar'
>> -r-------- 1 root root 0 Sep 21 14:45 /foo.bar
tar options explained:
c creates a new archive instead of unpacking one.
f - will write to stdout instead of a file.
foo.bar is the input file to be packed.
--mode specifies the permissions for the target. Similar to chmod, they can be given in symbolic notation or as an octal number.
--owner sets the new owner of the file.
--group sets the new group of the file.
docker cp - reads the file that is to be copied into the container from stdin.
This approach is useful when a file needs to be copied into a created container before it starts, so that docker exec is not an option (it can only operate on running containers).
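A sketch of that scenario (the image, container, and file names here are only placeholders):
docker create --name app my-image:latest    # container exists but has not started
tar -cf - config.yml --owner root --group root | docker cp - app:/etc/
docker start app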
Just a one-liner (similar to @ramu's answer), using root to make the call:
docker exec -u 0 -it <container-id> chown node:node /home/node/myfile
In order to get complete control of file ownership, I used the tar stream feature of docker cp:
If - is specified for either the SRC_PATH or DEST_PATH, you can also stream a tar archive from STDIN or to STDOUT.
I launch the docker cp process, then stream a tar file to or from the process. As the tar entries go past, I can adjust the ownership and permissions however I like.
Here's a simple example in Python that copies all the files from /outputs in the sandbox1 container to the current directory, excludes the current directory so its permissions don't get changed, and forces all the files to have read/write permissions for the user.
from subprocess import Popen, PIPE, CalledProcessError
import tarfile

def main():
    export_args = ['sudo', 'docker', 'cp', 'sandbox1:/outputs/.', '-']
    exporter = Popen(export_args, stdout=PIPE)
    tar_file = tarfile.open(fileobj=exporter.stdout, mode='r|')
    tar_file.extractall('.', members=exclude_root(tar_file))
    exporter.wait()
    if exporter.returncode:
        raise CalledProcessError(exporter.returncode, export_args)

def exclude_root(tarinfos):
    print('\nOutputs:')
    for tarinfo in tarinfos:
        if tarinfo.name != '.':
            assert tarinfo.name.startswith('./'), tarinfo.name
            print(tarinfo.name[2:])
            tarinfo.mode |= 0o600
            yield tarinfo

main()

Add a new entrypoint to a docker image

Recently, we decided to move one of our services to a docker container. The service is a product of another company, and they have provided us with the docker image. However, we need to do some extra configuration steps in the container entrypoint.
The first thing I tried, was to create a DockerFile from the base image and then add commands to do the extra steps, like this:
FROM baseimage:tag
RUN chmod a+w /path/to/entrypoint_creates_this_file
But it failed, because these extra steps must run after the base image's entrypoint has run.
Is there any way to extend the entrypoint of a base image? If not, what is the correct way to do this?
Thanks
I finally ended up calling the original entrypoint bash script from my new entrypoint bash script before doing the other extra configuration steps.
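A minimal sketch of such a wrapper (the original entrypoint's path is an assumption; find the real one with docker inspect on the base image, and note this only works if that script returns instead of exec-ing the service itself):
#!/bin/bash
set -e
# run the base image's original entrypoint first
/usr/local/bin/original-entrypoint.sh
# then perform the extra configuration steps
chmod a+w /path/to/entrypoint_creates_this_file
# finally hand over to the container's command
exec "$@"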
You do not even need to create a new Dockerfile. To modify the entrypoint, you can just run the image with a command such as the one below:
docker run --entrypoint new-entry-point-cmd baseimage:tag <optional-args-to-entrypoint>
Create your custom entry-point file, add it to the image, and specify it as your entrypoint file:
FROM image:base
COPY /path/to/my-entry-point.sh /my-entry-point.sh
# do whatever else you need here
ENTRYPOINT ["/my-entry-point.sh"]
Let me take an example with certbot. Using the excellent answer from Anoop, we can get an interactive shell (-ti) into a temporary container (--rm) with this image like so:
$ docker run --rm -ti --entrypoint /bin/sh certbot/certbot:latest
But what if we want to run a command after the original entry point, like the OP requested? We could run a shell and join the commands as in the following example:
$ docker run --rm --entrypoint /bin/sh certbot/certbot:latest \
-c "certbot --version && touch i-can-do-nice-things-here && ls -lah"
certbot 1.30.0
total 28K
drwxr-xr-x 1 root root 4.0K Oct 5 15:10 .
drwxr-xr-x 1 root root 4.0K Sep 7 18:10 ..
-rw-r--r-- 1 root root 0 Oct 5 15:10 i-can-do-nice-things-here
drwxr-xr-x 1 root root 4.0K Sep 7 18:10 src
drwxr-xr-x 1 root root 4.0K Sep 7 18:10 tools
Background
If I run it with the original entrypoint I will get this:
$ docker run --rm certbot/certbot:latest
Saving debug log to /var/log/letsencrypt/letsencrypt.log
Certbot doesn't know how to automatically configure the web server on this
system. However, it can still get a certificate for you. Please run
"certbot certonly" to do so. You'll need to manually configure your
web server to use the resulting certificate.
Or:
$ docker run --rm certbot/certbot:latest --version
certbot 1.30.0
I can see the entrypoint with docker inspect:
$ docker inspect certbot/certbot:latest | grep -i entry -C 2
},
"WorkingDir": "/opt/certbot",
"Entrypoint": null,
"OnBuild": null,
"Labels": null
--
},
"WorkingDir": "/opt/certbot",
"Entrypoint": [
"certbot"
],
If /bin/sh doesn't work in your container, try /bin/bash.
