How to run an sh script in a Dockerfile? - docker

When running an sh script from my Dockerfile, I got the following error:
./upload.sh: 5: ./upload.sh: sudo: not found ./upload.sh: 21:
./upload.sh: Bad substitution
sudo chmod 755 upload.sh # line 5
version=$(git rev-parse --short HEAD)
echo "version $version"
echo "Uploading file"
for path in $(find public/files -name "*.txt"); do
echo "path $path"
WORDTOREMOVE="public/"
echo "WORDTOREMOVE $WORDTOREMOVE"
# cause of the error
newpath=${path//$WORDTOREMOVE/} # Line 21
echo "new path $path"
url=http://localhost:3000/${newpath}
...
echo "Uploading file"
...
done
Dockerfile:
FROM node:10-slim
EXPOSE 3000 4001
WORKDIR /prod/code
...
COPY . .
RUN ./upload.sh
RUN npm run build
CMD ./DockerRun.sh
Any idea?

If anyone faces the same issue, here's how I fixed it:
chmod +x upload.sh
git update-index --chmod=+x upload.sh (mandatory if you pushed the file to a remote branch before changing its permissions)
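To double-check that the executable bit really made it into git (a quick verification, not part of the original answer):
git ls-files --stage upload.sh
# 100755 means the executable bit is committed; 100644 means it is not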

The docker image you are using (node:10-slim) does not have sudo installed, because this image runs its processes as the root user:
docker run -it node:10-slim bash
root@68dcffceb88c:/# id
uid=0(root) gid=0(root) groups=0(root)
root@68dcffceb88c:/# which sudo
root@68dcffceb88c:/#
When your Dockerfile runs RUN ./upload.sh it will run:
sudo chmod 755 upload.sh
Using sudo inside the container fails because sudo is not installed; there is also no need for it, because every command inside the container already runs as root.
Simply remove the sudo from line number 5.
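The second error, Bad substitution, has a different cause: ${path//$WORDTOREMOVE/} is a bash-only expansion, and the script is being interpreted by plain sh (dash on Debian-based images such as node:10-slim), which does not support it. Two possible fixes, sketched under the assumption that the rest of the script stays the same:
# Option 1: stay POSIX sh compatible inside upload.sh
newpath=$(echo "$path" | sed "s|$WORDTOREMOVE||")
# Option 2: run the script explicitly with bash (which Debian-based
# images such as node:10-slim ship with) in the Dockerfile
RUN bash ./upload.sh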
If you wish to update the running shell's PATH variable, run:
PATH=$PATH:/directorytoadd/bin
This will append the directory "/directorytoadd/bin" to the current path.
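Note that a plain PATH=... assignment only lasts for the current RUN instruction; to persist the change across the whole image, the usual Dockerfile idiom is ENV (using the same illustrative directory):
ENV PATH="${PATH}:/directorytoadd/bin"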

Related

using cryptsetup in gitlab-ci pipeline

I have created a docker container to be used in a gitlab-ci pipeline to build a java project. Properties files for an integration test are in an encrypted image I want to mount in the container during the execution of the integration test stage.
In my Dockerfile:
RUN apt install -y libncurses5-dev
RUN apt install -y cryptsetup
The docker image gets built and pushed without problems.
In my Gitlab CI File:
mkdir -p configs
echo -n $LUKSOPEN_PASS_PHRASE | /usr/sbin/cryptsetup luksOpen configs.img /home/java/config
But when the pipeline is run:
Executing "step_script" stage of the job script
00:01
$ -| echo 'integration tests' mkdir -p /home/java/config mkdir -p configs cryptsetup echo -n $LUKSOPEN_PASS_PHRASE | /usr/sbin/cryptsetup luksOpen configs.img /home/java/config mount /dev/mapper/configs configs cp configs/* /home/java/config ls /home/java/config gradle integrationTest
/usr/bin/bash: line 113: /usr/sbin/cryptsetup: No such file or directory
/usr/bin/bash: line 113: -: command not found
When I log into the container, cryptsetup is there:
sven@ixori:~/workspace/java-11-container$ docker run -it 9b733f66d757 /bin/bash
root@1193dc1f57b9:/# cryptsetup
Usage: cryptsetup [-?Vvyrq] [-?|--help] [--usage] [-V|--version] [-v|--verbose] [--debug]
[--debug-json] [-c|--cipher=STRING] [-h|--hash=STRING] [-y|--verify-passphrase]
What am I missing?
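One detail worth noticing in the job log above: the whole script collapsed onto a single line beginning with -|, which bash then tried to execute as a command (hence -: command not found). A multi-line script: entry in .gitlab-ci.yml is normally written with a literal block scalar, along these lines (a sketch reassembled from the commands visible in the log, dropping the stray bare cryptsetup call; the job name is made up):
integration_test:
  script:
    - |
      echo 'integration tests'
      mkdir -p /home/java/config
      mkdir -p configs
      echo -n $LUKSOPEN_PASS_PHRASE | /usr/sbin/cryptsetup luksOpen configs.img /home/java/config
      mount /dev/mapper/configs configs
      cp configs/* /home/java/config
      ls /home/java/config
      gradle integrationTest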

Docker, entrypoint strange behavior

I have the following Docker File (from ubuntu image):
...
WORKDIR ${DJANGO_BASE_DIR}  # --> /opt/django
COPY --chown=${USERNAME}:${USERNAME} /deployment/entrypoint.sh ${DJANGO_BASE_DIR}
USER ${USERNAME}
RUN echo ${DJANGO_BASE_DIR}  # --> /opt/django
CMD ["bash","entrypoint.sh"]  # I also tried ENTRYPOINT
My entrypoint is the following:
...
echo "Waiting for postgres..."
while ! nc -z $DB_HOST $DB_PORT; do
sleep 0.1
done
cd
echo "PostgreSQL started"
echo $DJANGO_DEBUG
echo $(pwd)  # displays /home/django
....
I don't understand why my entrypoint is running in the /home/django directory; I expect it to run in the WORKDIR, which is /opt/django.
The bare cd command in your entrypoint script changes the working directory to the current user's home directory; cd with no arguments always goes to $HOME. Remove that line and the script will keep running in the WORKDIR.
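For reference, a trimmed entrypoint without the stray cd (a sketch built from the fragment above):
#!/bin/bash
echo "Waiting for postgres..."
# nc -z only probes the port, it sends no data
while ! nc -z "$DB_HOST" "$DB_PORT"; do
  sleep 0.1
done
echo "PostgreSQL started"
echo "$DJANGO_DEBUG"
pwd   # now prints /opt/django, the Dockerfile's WORKDIR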

How to prepend something to CLI in docker container?

I want to prepend something to the CLI passed in to a docker container.
I want it to run like this:
docker run -it mstools msbuild.exe --version
But, to make that work internally, I need to prepend the full path to the msbuild.exe along with mono, like this:
mono /Microsoft.Build.Mono.Debug.14.1.0.0-prerelease/lib/msbuild.exe --version
When I use my below Dockerfile with the command, I get this:
$ docker run -it mstools msbuild.exe --version
msbuild.exe: 1: msbuild.exe: [/usr/bin/mono,: not found
If I jump into the container and check the path:
$ docker run -it --entrypoint=bash mstools
root@eb47008f092e:/# which mono
/usr/bin/mono
What am I missing??
Dockerfile:
FROM centeredge/nuget
ARG VERSION="14.1.0.0-prerelease"
RUN nuget install Microsoft.Build.Mono.Debug -Version $VERSION -Source "https://www.myget.org/F/dotnet-buildtools/"
ENV PATH="/Microsoft.Build.Mono.Debug.$VERSION/lib/:${PATH}"
ENTRYPOINT ['/usr/bin/mono', " /Microsoft.Build.Mono.Debug.$VERSION/lib/$1 $@"]
The error you get certainly comes from the fact you use single quotes ' instead of double quotes " in the ENTRYPOINT exec form.
In addition, I don't think the "$@" phrasing you mention will work (because "$@" needs some shell to evaluate it, while in the exec form there is no /bin/sh -c … implied). But the exec form of ENTRYPOINT is definitely the way to go.
So I'd suggest you write something like this:
FROM centeredge/nuget
ARG VERSION="14.1.0.0-prerelease"
RUN nuget install Microsoft.Build.Mono.Debug -Version $VERSION -Source "https://www.myget.org/F/dotnet-buildtools/"
ENV PATH="/Microsoft.Build.Mono.Debug.$VERSION/lib/:${PATH}"
COPY entrypoint.sh /usr/src/
RUN chmod a+x /usr/src/entrypoint.sh
ENTRYPOINT ["/usr/src/entrypoint.sh"]
with entrypoint.sh containing:
#!/bin/bash
exec /usr/bin/mono "/Microsoft.Build.Mono.Debug.$VERSION/lib/$1" "$@"
(Note: I didn't test this example code, so please comment if you find a typo.)
Final working solution based on @ErikMD's answer:
FROM centeredge/nuget
ARG VERSION="14.1.0.0-prerelease"
RUN nuget install Microsoft.Build.Mono.Debug -Version $VERSION -Source "https://www.myget.org/F/dotnet-buildtools/"
ENV PATH="/Microsoft.Build.Mono.Debug.$VERSION/lib/:/Microsoft.Build.Mono.Debug.$VERSION/lib/tools/:${PATH}"
RUN echo '#!/bin/bash' > /usr/src/entrypoint.sh && echo 'exec /usr/bin/mono "$(which "$1")" "$@"' >> /usr/src/entrypoint.sh && chmod a+x /usr/src/entrypoint.sh
ENTRYPOINT ["/usr/src/entrypoint.sh"]
output
docker run -it mstools MSBuild.exe -version
Microsoft (R) Build Engine version 14.1.0.0
Copyright (C) Microsoft Corporation. All rights reserved.
14.1.0.0
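Note that the final version works because the ENV PATH line puts both lib/ and lib/tools/ of the nuget package on the container's PATH, so which "$1" can resolve a bare MSBuild.exe to an absolute path before handing it to mono.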

How to workaround "the input device is not a TTY" when using grunt-shell to invoke a script that calls docker run?

When issuing grunt shell:test, I'm getting the warning "the input device is not a TTY" and don't want to have to use -f (force):
$ grunt shell:test
Running "shell:test" (shell) task
the input device is not a TTY
Warning: Command failed: /bin/sh -c ./run.sh npm test
the input device is not a TTY
Use --force to continue.
Aborted due to warnings.
Here's the Gruntfile.js command:
shell: {
  test: {
    command: './run.sh npm test'
  }
}
Here's run.sh:
#!/bin/sh
# should use the latest available image to validate, but not LATEST
if [ -f .env ]; then
RUN_ENV_FILE='--env-file .env'
fi
docker run $RUN_ENV_FILE -it --rm --user node -v "$PWD":/app -w /app yaktor/node:0.39.0 $@
Here's the relevant package.json scripts with command test:
"scripts": {
"test": "mocha --color=true -R spec test/*.test.js && npm run lint"
}
How can I get grunt to make docker happy with a TTY? Executing ./run.sh npm test outside of grunt works fine:
$ ./run.sh npm test
> yaktor@0.59.2-pre.0 test /app
> mocha --color=true -R spec test/*.test.js && npm run lint
[snip]
105 passing (3s)
> yaktor@0.59.2-pre.0 lint /app
> standard --verbose
Remove the -t from the docker run command:
docker run $RUN_ENV_FILE -i --rm --user node -v "$PWD":/app -w /app yaktor/node:0.39.0 $@
The -t tells docker to allocate a tty, which won't work if you don't have a tty yourself (as under grunt) and you attach to the container (the default when you don't pass -d).
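If you still want a TTY when one is actually available (for example, when calling run.sh by hand), a common pattern is to test stdin first; a sketch, with a made-up variable name:
# pass -t only when stdin is a real terminal
if [ -t 0 ]; then
  TTY_FLAG='-it'
else
  TTY_FLAG='-i'
fi
docker run $RUN_ENV_FILE $TTY_FLAG --rm --user node -v "$PWD":/app -w /app yaktor/node:0.39.0 "$@"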
This solved an annoying issue for me. The script had these lines:
docker exec **-it** $( docker ps | grep mysql | cut -d' ' -f1) mysql --user= ..... > /var/tmp/temp.file
mutt -s "File is here" someone@somewhere.com < /var/tmp/temp.file
The script would run great if run directly, and the mail would arrive with the correct output. However, when run from cron (crontab -e), the mail would arrive with no content. I tried many things around permissions, shells, paths, etc. However, no joy!
Finally found this:
*/20 * * * * scriptblah.sh > $HOME/cron.log 2>&1
And on that cron.log file found this output:
the input device is not a TTY
Search led me here. And after I removed the -t, it's working great now!
docker exec **-i** $( docker ps | grep mysql | cut -d' ' -f1) mysql --user= ..... > /var/tmp/temp.file

How to get /etc/profile to run automatically in Alpine / Docker

How can I get /etc/profile to run automatically when starting an Alpine Docker container interactively? I have added some aliases to an aliases.sh file and placed it in /etc/profile.d, but when I start the container using docker run -it [my_container] sh, my aliases aren't active. I have to manually type . /etc/profile from the command line each time.
Is there some other configuration necessary to get /etc/profile to run at login? I've also had problems with using a ~/.profile file. Any insight is appreciated!
EDIT:
Based on VonC's answer, I pulled and ran his example ruby container. Here is what I got:
$ docker run --rm --name ruby -it codeclimate/alpine-ruby:b42
/ # more /etc/profile.d/rubygems.sh
export PATH=$PATH:/usr/lib/ruby/gems/2.0.0/bin
/ # env
no_proxy=*.local, 169.254/16
HOSTNAME=6c7e93ebc5a1
SHLVL=1
HOME=/root
TERM=xterm
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
PWD=/
/ # exit
Although the /etc/profile.d/rubygems.sh file exists, it is not being run when I log in, and my PATH environment variable is not being updated. Am I using the wrong docker run command? Is something else missing? Has anyone gotten ~/.profile or /etc/profile.d/ files to work with Alpine on Docker? Thanks!
The default shell in Alpine Linux is ash.
Ash will only read the /etc/profile and ~/.profile files if it is started as a login shell sh -l.
To force Ash to source /etc/profile, or any other script you want, upon its invocation as a non-login shell, you need to set up an environment variable called ENV before launching Ash.
e.g. in your Dockerfile
FROM alpine:3.5
ENV ENV="/root/.ashrc"
RUN echo "echo 'Hello, world!'" > "$ENV"
When you build that you get:
deployer@ubuntu-1604-amd64:~/blah$ docker build --tag test .
Sending build context to Docker daemon 2.048kB
Step 1/3 : FROM alpine:3.5
3.5: Pulling from library/alpine
627beaf3eaaf: Pull complete
Digest: sha256:58e1a1bb75db1b5a24a462dd5e2915277ea06438c3f105138f97eb53149673c4
Status: Downloaded newer image for alpine:3.5
---> 4a415e366388
Step 2/3 : ENV ENV "/root/.ashrc"
---> Running in a9b6ff7303c2
---> 8d4af0b7839d
Removing intermediate container a9b6ff7303c2
Step 3/3 : RUN echo "echo 'Hello, world!'" > "$ENV"
---> Running in 57c2fd3353f3
---> 2cee6e034546
Removing intermediate container 57c2fd3353f3
Successfully built 2cee6e034546
Finally, when you run the newly generated container, you get:
deployer@ubuntu-1604-amd64:~/blah$ docker run -ti test /bin/sh
Hello, world!
/ # exit
Notice the Ash shell didn't run as a login shell.
So to answer your query, replace
ENV ENV="/root/.ashrc"
with:
ENV ENV="/etc/profile"
and Alpine Linux's Ash shell will automatically source the /etc/profile script each time the shell is launched.
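Applied to the aliases.sh setup from the question, that would look something like this (a sketch; note the gotcha below before copying it):
FROM alpine:3.5
# make every ash invocation source /etc/profile, which in turn
# sources the scripts in /etc/profile.d/
ENV ENV="/etc/profile"
COPY aliases.sh /etc/profile.d/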
Gotcha: /etc/profile is normally meant to be sourced only once! So I would advise that you don't source it; source a /root/.somercfile instead.
Source: https://stackoverflow.com/a/40538356
You can still try, in your Dockerfile, a:
RUN echo '\
. /etc/profile ; \
' >> /root/.profile
(assuming the current user is root. If not, replace /root with the full home path)
That being said, those /etc/profile.d/xx.sh should run.
See codeclimate/docker-alpine-ruby as an example:
COPY files /
with files/etc including a files/etc/profile.d/rubygems.sh that runs just fine.
In the OP's project Dockerfile, there is a
COPY aliases.sh /etc/profile.d/
But the default shell is not a login shell (sh -l), which means profile files (or those in /etc/profile.d) are not sourced.
Adding sh -l would work:
docker@default:~$ docker run --rm --name ruby -it codeclimate/alpine-ruby:b42 sh -l
87a58e26b744:/# echo $PATH
/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/lib/ruby/gems/2.0.0/bin
As mentioned by Jinesh before, the default shell in Alpine Linux is ash
localhost:~$ echo $SHELL
/bin/ash
localhost:~$
Therefore the simple solution is to add your aliases to ~/.profile. In this case, I put all my aliases in ~/.ash_aliases:
localhost:~$ cat .profile
# ~/.profile
# Alias
if [ -f ~/.ash_aliases ]; then
. ~/.ash_aliases
fi
localhost:~$
.ash_aliases file
localhost:~$ cat .ash_aliases
alias a=alias
alias c=clear
alias f=file
alias g=grep
alias l='ls -lh'
localhost:~$
And it works :)
I use this:
docker exec -it my_container /bin/ash '-l'
The -l flag passed to ash makes it behave as a login shell, thus reading ~/.profile.
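The same idea also works as an image default (a sketch):
# start ash as a login shell so /etc/profile and ~/.profile are sourced
CMD ["/bin/ash", "-l"]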
