Add a new entrypoint to a Docker image

Recently, we decided to move one of our services into a Docker container. The service is a product of another company, and they have provided us with the Docker image. However, we need to perform some extra configuration steps in the container entrypoint.
The first thing I tried was to create a Dockerfile from the base image and then add commands for the extra steps, like this:
FROM baseimage:tag
RUN chmod a+w /path/to/entrypoint_creates_this_file
But it failed, because these extra steps must run after the base image's entrypoint.
Is there any way to extend the entrypoint of a base image? If not, what is the correct way to do this?
Thanks

I finally ended up calling the original entrypoint bash script from my new entrypoint bash script, before doing the other extra configuration steps.
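A minimal sketch of that approach (the original entrypoint's path inside the vendor image is an assumption; look it up with docker inspect on the base image):
#!/bin/bash
# my-entry-point.sh: run the vendor's original entrypoint first
/original-entrypoint.sh
# then perform the extra configuration steps from the question
chmod a+w /path/to/entrypoint_creates_this_file
# finally hand control to the container's main command, if any
exec "$@"
wired in with a Dockerfile like:
FROM baseimage:tag
COPY my-entry-point.sh /my-entry-point.sh
RUN chmod +x /my-entry-point.sh
ENTRYPOINT ["/my-entry-point.sh"]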

You do not even need to create a new Dockerfile. To override the entrypoint, you can just run the image with a command such as the one below:
docker run --entrypoint new-entry-point-cmd baseimage:tag <optional-args-to-entrypoint>
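For example, to run the question's extra step after the original entrypoint without building a new image at all (/original-entrypoint.sh is an assumed placeholder for the base image's real entrypoint):
docker run --entrypoint /bin/sh baseimage:tag -c "/original-entrypoint.sh && chmod a+w /path/to/entrypoint_creates_this_file"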

Create your custom entrypoint file, add it to the image, and specify it as your entrypoint:
FROM image:base
COPY path/to/my-entry-point.sh /my-entry-point.sh
# do any extra setup here
ENTRYPOINT ["/my-entry-point.sh"]

Let me take an example with certbot. Using the excellent answer from Anoop, we can get an interactive shell (-ti) into a temporary container (--rm) with this image like so:
$ docker run --rm -ti --entrypoint /bin/sh certbot/certbot:latest
But what if we want to run a command after the original entry point, like the OP requested? We could run a shell and join the commands as in the following example:
$ docker run --rm --entrypoint /bin/sh certbot/certbot:latest \
-c "certbot --version && touch i-can-do-nice-things-here && ls -lah"
certbot 1.30.0
total 28K
drwxr-xr-x 1 root root 4.0K Oct 5 15:10 .
drwxr-xr-x 1 root root 4.0K Sep 7 18:10 ..
-rw-r--r-- 1 root root 0 Oct 5 15:10 i-can-do-nice-things-here
drwxr-xr-x 1 root root 4.0K Sep 7 18:10 src
drwxr-xr-x 1 root root 4.0K Sep 7 18:10 tools
Background
If I run it with the original entrypoint I will get this:
$ docker run --rm certbot/certbot:latest
Saving debug log to /var/log/letsencrypt/letsencrypt.log
Certbot doesn't know how to automatically configure the web server on this system. However, it can still get a certificate for you. Please run "certbot certonly" to do so. You'll need to manually configure your web server to use the resulting certificate.
Or:
$ docker run --rm certbot/certbot:latest --version
certbot 1.30.0
I can see the entrypoint with docker inspect:
$ docker inspect certbot/certbot:latest | grep -i entry -C 2
},
"WorkingDir": "/opt/certbot",
"Entrypoint": null,
"OnBuild": null,
"Labels": null
--
},
"WorkingDir": "/opt/certbot",
"Entrypoint": [
"certbot"
],
If /bin/sh doesn't work in your container, try /bin/bash.

Related

Entrypoint can't execute command

I don't understand why my entrypoint can't execute my command. My entrypoint looks like this:
#!/bin/bash
...
exec "$@"
My script exists; I can run it when I go inside my container:
drwxrwxrwx 1 root root 512 mars 25 09:07 .
drwxrwxrwx 1 root root 512 mars 25 09:07 ..
-rwxrwxrwx 1 root root 128 mars 25 10:05 entrypoint.sh
-rwxrwxrwx 1 root root 481 mars 25 09:07 init-dev.sh
-rwxrwxrwx 1 root root 419 mars 25 10:02 migration.sh
root@0c0062fbf916:/app/scripts# pwd
/app/scripts
And when I run my container: docker run my_container "scripts/migration.sh"
I got this error:
scripts/entrypoint.sh: line 8: /app/scripts/migration.sh: No such file or directory
I have the same error if I just run ls -all
docker run my_container "ls -all"
exec: ls -all: not found
I'm switching back and forth between Linux and Windows, so I checked the line endings (LF vs. CRLF), but changing them made no difference.
Your first command doesn't work because your scripts are in /app/scripts (note the plural), but you're trying to run script/migration.sh. Additionally, it's not clear what the current working directory is in your container: even if you wrote scripts/migration.sh, that would only work if either your Dockerfile contains WORKDIR /app or your docker run command line includes -w /app. You would be better off using a fully qualified path:
docker run mycontainer /app/scripts/migration.sh
Your second example (docker run my_container "ls -all") is over-quoted and would never work. You need to write docker run my_container ls -all, except that -all isn't actually an option that ls accepts, although it happens to work by virtue of being parsed as the combination of the -a and -l options.
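To make the difference concrete (my_container stands in for any image, as in the question):
docker run my_container "ls -all"   # fails: exec: "ls -all": not found, one literal command name
docker run my_container ls -all     # works: "ls" is the command, "-all" its argument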

docker ADD --chown bug or feature?

I am having a problem adding a file to an image and setting ownership via the --chown flag. Specifically, here is a Dockerfile adding a simple text file:
FROM fedora:24
ARG user_name=slave
ARG user_uid=1000
ARG user_home=/home/$user_name/
RUN useradd -l -u ${user_uid} -ms /bin/bash $user_name
WORKDIR ${user_home}
USER ${user_name}
ADD --chown=1397765041:1397765041 test.txt ./
CMD ls -l
This results in the expected ownership of test.txt, as can be seen:
$ docker run --rm -it bm/tmp:latest
total 4
-rw-r--r-- 1 some_user 1397765041 6 Oct 21 20:00 test.txt
Cool. Now if I change test.txt to a tar file (for example boost_1_57_0.tar.bz2) and rebuild, this is what I get:
$ docker run --rm -it bm/tmp:latest
total 4
drwx------ 8 501 root 4096 Oct 31 2014 boost_1_57_0
Here is how I am building (though it probably doesn't matter):
docker build -t bm/tmp --build-arg user_name=some_user --build-arg user_uid=1397765041 .
As we can see, ownership is NOT as expected in this case. It seems the behavior of --chown differs between the two cases shown above. I know that ADD automatically extracts tars. I don't know how the ownership is being set in the case where the file is a tar file. Anyone?
Unfortunately, ADD --chown only works for regular files. When ADD is given a tarball, it uses the ownership and permissions recorded inside the tarball.
Workarounds (a sketch of each follows below):
Run tar yourself with --owner/--owner-map/--group/--group-map before adding the archive.
chown -R after the ADD.
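A minimal sketch of each workaround, reusing the names from the question (the exact flags and UID are illustrative). First, repacking the archive on the host with GNU tar so the desired owner is recorded inside it:
tar -cjf boost_1_57_0.tar.bz2 --owner=1397765041 --group=1397765041 boost_1_57_0/
Second, fixing ownership in the Dockerfile after ADD has extracted the tarball; note the chown has to run while the build is still root, i.e. before USER switches:
FROM fedora:24
ARG user_name=slave
ARG user_uid=1000
RUN useradd -l -u ${user_uid} -ms /bin/bash $user_name
WORKDIR /home/${user_name}/
ADD boost_1_57_0.tar.bz2 ./
# ADD extracted the tar with the ownership recorded inside it; repair it as root
RUN chown -R ${user_uid}:${user_uid} boost_1_57_0
USER ${user_name}
CMD ls -l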

File ownership after docker cp

How can I control which user owns the files I copy in and out of a container?
The docker cp command says this about file ownership:
The cp command behaves like the Unix cp -a command in that directories are copied recursively with permissions preserved if possible. Ownership is set to the user and primary group at the destination. For example, files copied to a container are created with UID:GID of the root user. Files copied to the local machine are created with the UID:GID of the user which invoked the docker cp command. However, if you specify the -a option, docker cp sets the ownership to the user and primary group at the source.
It says that files copied to a container are created as the root user, but that's not what I see. I create two files owned by user id 1005 and 1006. Those owners are translated into the container's user namespace. The -a option seems to make no difference when I copy the file into a container.
$ sudo chown 1005:1005 test.txt
$ ls -l test.txt
-rw-r--r-- 1 1005 1005 29 Oct 6 12:43 test.txt
$ docker volume create sandbox1
sandbox1
$ docker run --name run1 -vsandbox1:/data alpine echo OK
OK
$ docker cp test.txt run1:/data/test1005.txt
$ docker cp -a test.txt run1:/data/test1005a.txt
$ sudo chown 1006:1006 test.txt
$ docker cp test.txt run1:/data/test1006.txt
$ docker cp -a test.txt run1:/data/test1006a.txt
$ docker run --rm -vsandbox1:/data alpine ls -l /data
total 16
-rw-r--r-- 1 1005 1005 29 Oct 6 19:43 test1005.txt
-rw-r--r-- 1 1005 1005 29 Oct 6 19:43 test1005a.txt
-rw-r--r-- 1 1006 1006 29 Oct 6 19:43 test1006.txt
-rw-r--r-- 1 1006 1006 29 Oct 6 19:43 test1006a.txt
When I copy files out of the container, they are always owned by me. Again, the -a option seems to do nothing.
$ docker run --rm -vsandbox1:/data alpine cp /data/test1006.txt /data/test1007.txt
$ docker run --rm -vsandbox1:/data alpine chown 1007:1007 /data/test1007.txt
$ docker cp run1:/data/test1006.txt .
$ docker cp run1:/data/test1007.txt .
$ docker cp -a run1:/data/test1006.txt test1006a.txt
$ docker cp -a run1:/data/test1007.txt test1007a.txt
$ ls -l test*.txt
-rw-r--r-- 1 don don 29 Oct 6 12:43 test1006a.txt
-rw-r--r-- 1 don don 29 Oct 6 12:43 test1006.txt
-rw-r--r-- 1 don don 29 Oct 6 12:47 test1007a.txt
-rw-r--r-- 1 don don 29 Oct 6 12:47 test1007.txt
-rw-r--r-- 1 1006 1006 29 Oct 6 12:43 test.txt
$
You can also change the ownership by logging into the container as the root user:
docker exec -it --user root <container-id> /bin/bash
chown -R <username>:<groupname> <folder/file>
In addition to @Don Kirkby's answer, let me provide a similar example in bash/shell script for the case where you want to copy something into a container while applying different ownership and permissions than those of the original file.
Let's create a new container from a small image that will keep running by itself:
docker run -d --name nginx nginx:alpine
Now we'll create a new file which is owned by the current user and has default permissions:
touch foo.bar
ls -ahl foo.bar
>> -rw-rw-r-- 1 my-user my-group 0 Sep 21 16:45 foo.bar
Copying this file into the container will set ownership and group to the UID of my user and preserve the permissions:
docker cp foo.bar nginx:/foo.bar
docker exec nginx sh -c 'ls -ahl /foo.bar'
>> -rw-rw-r-- 1 4098 4098 0 Sep 21 14:45 /foo.bar
Using a little tar work-around, however, I can change the ownership and permissions that are applied inside of the container.
tar -cf - foo.bar --mode u=+r,g=-rwx,o=-rwx --owner root --group root | docker cp - nginx:/
docker exec nginx sh -c 'ls -ahl /foo.bar'
>> -r-------- 1 root root 0 Sep 21 14:45 /foo.bar
tar options explained:
c creates a new archive instead of unpacking one.
f - will write to stdout instead of a file.
foo.bar is the input file to be packed.
--mode specifies the permissions for the target. Similar to chmod, they can be given in symbolic notation or as an octal number.
--owner sets the new owner of the file.
--group sets the new group of the file.
docker cp - reads the file that is to be copied into the container from stdin.
This approach is useful when a file needs to be copied into a created container before it starts, such that docker exec is not an option (docker exec can only operate on running containers), as sketched below.
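For instance, this sketch (the container name app is made up) applies the same tar trick to a container that exists but has not started yet:
docker create --name app nginx:alpine
tar -cf - foo.bar --mode u=+r,g=-rwx,o=-rwx --owner root --group root | docker cp - app:/
docker start app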
Just a one-liner (similar to @ramu's answer), using root to make the call:
docker exec -u 0 -it <container-id> chown node:node /home/node/myfile
In order to get complete control of file ownership, I used the tar stream feature of docker cp:
If - is specified for either the SRC_PATH or DEST_PATH, you can also stream a tar archive from STDIN or to STDOUT.
I launch the docker cp process, then stream a tar file to or from the process. As the tar entries go past, I can adjust the ownership and permissions however I like.
Here's a simple example in Python that copies all the files from /outputs in the sandbox1 container to the current directory, excludes the current directory so its permissions don't get changed, and forces all the files to have read/write permissions for the user.
from subprocess import Popen, PIPE, CalledProcessError
import tarfile

def main():
    export_args = ['sudo', 'docker', 'cp', 'sandbox1:/outputs/.', '-']
    exporter = Popen(export_args, stdout=PIPE)
    tar_file = tarfile.open(fileobj=exporter.stdout, mode='r|')
    tar_file.extractall('.', members=exclude_root(tar_file))
    exporter.wait()
    if exporter.returncode:
        raise CalledProcessError(exporter.returncode, export_args)

def exclude_root(tarinfos):
    print('\nOutputs:')
    for tarinfo in tarinfos:
        if tarinfo.name != '.':
            assert tarinfo.name.startswith('./'), tarinfo.name
            print(tarinfo.name[2:])
            tarinfo.mode |= 0o600
            yield tarinfo

main()
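The same tar-stream trick works in the other direction. Here is a sketch (not from the original answer; the file name and contents are made up) that copies a single in-memory file into the sandbox1 container's /outputs directory, owned by root regardless of who runs it:
from subprocess import Popen, PIPE, CalledProcessError
import io
import tarfile

def copy_in(name, data):
    # Stream a tar archive into `docker cp -`, which extracts it in the container.
    import_args = ['sudo', 'docker', 'cp', '-', 'sandbox1:/outputs']
    importer = Popen(import_args, stdin=PIPE)
    with tarfile.open(fileobj=importer.stdin, mode='w|') as tar_file:
        tarinfo = tarfile.TarInfo(name)
        tarinfo.size = len(data)
        tarinfo.mode = 0o644           # permissions as they should appear inside
        tarinfo.uid = tarinfo.gid = 0  # owned by root after extraction
        tar_file.addfile(tarinfo, io.BytesIO(data))
    importer.stdin.close()  # send EOF so docker cp finishes
    importer.wait()
    if importer.returncode:
        raise CalledProcessError(importer.returncode, import_args)

copy_in('settings.conf', b'debug = false\n')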

Docker Jboss/wildfly: How to add datasources and MySQL connector

I am learning Docker, which is completely new to me. I was already able to create a jboss/wildfly image, and then I was able to start JBoss with my application using this Dockerfile:
FROM jboss/wildfly
CMD ["/opt/jboss/wildfly/bin/standalone.sh", "-c", "standalone-full.xml", "-b", "0.0.0.0"]
ADD mywebapp-web/target/mywebapp-1.0.war /opt/jboss/wildfly/standalone/deployments/mywebapp-1.0.war
Now I would like to add support for a MySQL database by adding a datasource to the standalone configuration, along with the MySQL connector. For that I am following this example:
https://github.com/arun-gupta/docker-images/tree/master/wildfly-mysql-javaee7
Following are my Dockerfile and my execute.sh script.
Dockerfile:
FROM jboss/wildfly:latest
ADD customization /opt/jboss/wildfly/customization/
CMD ["/opt/jboss/wildfly/customization/execute.sh"]
execute.sh script:
#!/bin/bash
# Usage: execute.sh [WildFly mode] [configuration file]
#
# The default mode is 'standalone' and default configuration is based on the
# mode. It can be 'standalone.xml' or 'domain.xml'.
echo "=> Executing Customization script"
JBOSS_HOME=/opt/jboss/wildfly
JBOSS_CLI=$JBOSS_HOME/bin/jboss-cli.sh
JBOSS_MODE=${1:-"standalone"}
JBOSS_CONFIG=${2:-"$JBOSS_MODE.xml"}
function wait_for_server() {
  until `$JBOSS_CLI -c ":read-attribute(name=server-state)" 2> /dev/null | grep -q running`; do
    sleep 1
  done
}
echo "=> Starting WildFly server"
echo "JBOSS_HOME : " $JBOSS_HOME
echo "JBOSS_CLI : " $JBOSS_CLI
echo "JBOSS_MODE : " $JBOSS_MODE
echo "JBOSS_CONFIG: " $JBOSS_CONFIG
echo $JBOSS_HOME/bin/$JBOSS_MODE.sh -b 0.0.0.0 -c $JBOSS_CONFIG &
$JBOSS_HOME/bin/$JBOSS_MODE.sh -b 0.0.0.0 -c $JBOSS_CONFIG &
echo "=> Waiting for the server to boot"
wait_for_server
echo "=> Executing the commands"
$JBOSS_CLI -c --file=`dirname "$0"`/commands.cli
# Add MySQL module
module add --name=com.mysql --resources=/opt/jboss/wildfly/customization/mysql-connector-java-5.1.39-bin.jar --dependencies=javax.api,javax.transaction.api
# Add MySQL driver
/subsystem=datasources/jdbc-driver=mysql:add(driver-name=mysql,driver-module-name=com.mysql,driver-xa-datasource-class-name=com.mysql.jdbc.jdbc2.optional.MysqlXADataSource)
# Deploy the WAR
#cp /opt/jboss/wildfly/customization/leadservice-1.0.war $JBOSS_HOME/$JBOSS_MODE/deployments/leadservice-1.0.war
echo "=> Shutting down WildFly"
if [ "$JBOSS_MODE" = "standalone" ]; then
$JBOSS_CLI -c ":shutdown"
else
$JBOSS_CLI -c "/host=*:shutdown"
fi
echo "=> Restarting WildFly"
$JBOSS_HOME/bin/$JBOSS_MODE.sh -b 0.0.0.0 -c $JBOSS_CONFIG
But I get an error when I run the image, complaining that a file or directory is not found:
Building Image
$ docker build -t mpssantos/leadservice:latest .
Sending build context to Docker daemon 19.37 MB
Step 1 : FROM jboss/wildfly:latest
---> b8279b641e82
Step 2 : ADD customization /opt/jboss/wildfly/customization/
---> aea03d4f2819
Removing intermediate container 0920e2cd97fd
Step 3 : CMD /opt/jboss/wildfly/customization/execute.sh
---> Running in 8a0dbcb01855
---> 10335320b89d
Removing intermediate container 8a0dbcb01855
Successfully built 10335320b89d
SECURITY WARNING: You are building a Docker image from Windows against a non-Windows Docker host. All files and directories added to build context will have '-rwxr-xr-x' permissions. It is recommended to double check and reset permissions for sensitive files and directories.
Running image
$ docker run mpssantos/leadservice
no such file or directory
Error response from daemon: Cannot start container 5d3357ba17afa36e81d8794f2b0cd45cc00dde955b2b2054282c4ef17dd4f265: [8] System error: no such file or directory
Can someone let me know how I can access the filesystem so I can check which file or directory it is complaining about? Is there a better way to debug this?
I believe it is something related to the bash referenced on the first line of the script, because the following echo is not printed.
Thank you so much
I managed to get into the container to check what's inside.
1) ssh to the docker machine: docker-machine ssh default
2) checked the container id with the command: docker ps -a
3) opened a shell in the container with the command: sudo docker exec -i -t 665b4a1e17b6 /bin/bash
4) verified that the "/opt/jboss/wildfly/customization/" directory exists with the expected files
The customization dir has the following permissions and is listed like this:
drwxr-xr-x 2 root root 4096 Jun 12 23:44 customization
drwxr-xr-x 10 jboss jboss 4096 Jun 14 00:15 standalone
and the files inside the customization dir are:
drwxr-xr-x 2 root root 4096 Jun 12 23:44 .
drwxr-xr-x 12 jboss jboss 4096 Jun 14 00:15 ..
-rwxr-xr-x 1 root root 1755 Jun 12 20:06 execute.sh
-rwxr-xr-x 1 root root 989497 May 4 11:11 mysql-connector-java-5.1.39-bin.jar
If I try to execute the file I get this error:
[jboss@d68190e4f0d8 customization]$ ./execute.sh
bash: ./execute.sh: /bin/bash^M: bad interpreter: No such file or directory
Does this shed light on anything?
Thank you so much again
I found the issue: the execute.sh file had Windows line endings. I converted it to Unix line endings and it started to work.
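For reference, two common ways to do that conversion on the host before building (a sketch; the path assumes execute.sh sits in the customization folder next to the Dockerfile):
dos2unix customization/execute.sh
# or, if dos2unix is not installed, strip the carriage returns with sed:
sed -i 's/\r$//' customization/execute.sh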
I believe the execute.sh is not found. You can verify this by running the following and finding that the result is an empty directory:
docker run mpssantos/leadservice ls -al /opt/jboss/wildfly/customization/
The reason for this is that you are doing your build on a different (virtual) machine than your local system, so the build is pulling the "customization" folder from that VM. I'd run the build within the VM and place the files you want to import on that VM where the build can find them.

Multi command with docker in a script

With Docker I would like to offer each client a VM in which to compile and execute a C program contained in a single file.
For that, I share a folder between the host and the container by means of a Dockerfile and the ADD command.
My folder looks like this:
folder/id_user/script.sh
folder/id_user/code.c
In script.sh:
gcc ./compil/code.c -o ./compil/code && ./compil/code
My problem is that the docs say this about ADD:
All new files and directories are created with mode 0755, uid and gid 0.
But when I run ls on the directory, I get:
ls -l compil/8f41dacd-8775-483e-8093-09a8712e82b1/
total 8
-rw-r--r-- 1 1000 1000 51 Feb 11 10:52 code.c
-rw-r--r-- 1 1000 1000 54 Feb 11 10:52 script.sh
So I can't execute the script.sh. Do you know why?
Maybe you are wondering why I proceed like that. It's because if I do:
sudo docker run ubuntu/C pwd && pwd
result:
/
/srv/website
So we can see that the first command runs in the VM but the second does not. I understand this might be normal for Docker.
If you have any suggestions, I'd be pleased to hear them.
Thanks!
You can set the correct mode by using a RUN command with chmod:
# Dockerfile
...
ADD script.sh /root/script.sh
RUN chmod +x /root/script.sh
...
For the second question, you should use the CMD instruction - the && approach does work in a Dockerfile. Try putting this line at the end of your Dockerfile:
CMD pwd && pwd
then docker build . and you will see:
root@test:/home/test/# docker run <image>
/
/
Either that, or you can do:
RUN /bin/sh /root/script.sh
to achieve the same result.
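As a side note (an illustrative sketch, not part of the original answer): if you want both commands to run inside the container from the host's command line, wrap them in a shell yourself; otherwise the host shell consumes the &&, which is exactly what happened in the question:
sudo docker run ubuntu/C /bin/sh -c "pwd && pwd"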
