Dockerfile can't run shell scripts inside Jenkins - docker

I'm trying to run a shell script file inside a docker container. It's working as expected when testing it with the command line but not when running it inside Jenkins.
This is my Dockerfile:
FROM ubuntu:latest
RUN apt-get update \
&& apt-get dist-upgrade -y \
&& apt-get install -y \
build-essential libgtest-dev \
&& apt-get clean
WORKDIR /src
COPY . .
#working when using CLI and Jenkinsfile
RUN echo "test"
#working when using CLI and Jenkinsfile
RUN ls
#working when using CLI but NOT with Jenkinsfile
RUN ./build.sh
CMD ["./run_tests.sh"]
And this is how I build and run the image:
docker-build.sh
#!/bin/sh
set -x
#stop and rm old container if any
docker container stop build-fw
docker container rm build-fw
docker build -t build-fw .
docker run --name build-fw -d -it build-fw
docker logs build-fw
docker container stop build-fw
docker container rm build-fw
In Jenkins, I created a Freestyle project with an "Execute Shell" build step, which runs the "docker-build.sh" script:
Execute Shell
./docker-build.sh
I got the following error output:
#12 0.586 /bin/sh: 1: ./build.sh: not found
#12 ERROR: executor failed running [/bin/sh -c ./build.sh]: exit code: 127
------
> [8/8] RUN ./build.sh:
------
executor failed running [/bin/sh -c ./build.sh]: exit code: 127
The error states that ./build.sh was not found, although RUN ls shows that build.sh exists.
Why is it the case?

build.sh was not found because it had Windows-style line endings. I created the file on Windows, which uses different line endings than Linux, and that is why I got the error message. The fix is to make sure the file uses Linux-compatible line endings: LF, not CR or CRLF. In other words, \n, not \r or \r\n.
Here is an example to reproduce the issue in a container:
docker run --rm -it bash
bash-5.1# printf '#!/bin/sh\r\n' > /build.sh
bash-5.1# ./build.sh
bash: ./build.sh: /bin/sh^M: bad interpreter: No such file or directory
bash-5.1# /bin/sh -c ./build.sh
/bin/sh: ./build.sh: not found
So to fix this, I ran the command sed -i 's/\r//' build.sh, which removes any \r characters from build.sh.
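The whole failure mode can also be reproduced and fixed outside Docker; here is a minimal sketch (the file name crlf-demo.sh is made up for the demo):

```shell
# Create a script with Windows (CRLF) line endings, the way a
# Windows editor would have saved it
printf '#!/bin/sh\r\necho ok\r\n' > crlf-demo.sh
chmod +x crlf-demo.sh

# 'file' flags the problem without running anything:
# "... ASCII text, with CRLF line terminators"
file crlf-demo.sh

# Running it fails: the kernel looks for an interpreter literally
# named "/bin/sh\r", which does not exist
./crlf-demo.sh || true

# Strip the carriage returns (the same fix as above) and it runs
sed -i 's/\r//' crlf-demo.sh
./crlf-demo.sh    # prints: ok
```

On systems where it is available, dos2unix does the same conversion.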

Related

docker exit after executing the command?

I need to compile gem5 with the environment inside docker. This is not frequent, and once the compilation is done, I don't need the docker environment anymore.
I have a docker image named gerrie/gem5. I want to perform the following process.
Use this image to create a container, mount the local gem5 source code, compile and generate an executable file (executables end up in the build directory by default), then exit the container and delete it. I also want to be able to see the compilation output, so that if the code goes wrong, I can fix it.
But I ran into some problems.
docker run -it --rm -v ${HOST_GEM5}:${DOCKER_GEM5} gerrie/gem5 bash -c "scons build/X86/gem5.opt"
When I execute the above command, I am dropped into the container's terminal, but the command to compile gem5 (scons build/X86/gem5.opt) is not executed. I think it might be because of the -it option. When I remove that option, I don't see any output at all.
I replaced the command with the following sentence.
docker run -it --rm -v ${HOST_GEM5}:${DOCKER_GEM5} gerrie/gem5 bash -c "echo 'hello'"
But I still don't see any output.
When I went into the docker container and compiled it myself, the build directory was generated, but from outside docker I can't delete it.
What should I do? Thanks!
dockerfile
FROM matthewfeickert/docker-python3-ubuntu:latest
LABEL maintainer="Yujie YujieCui#pku.edu.cn"
USER root
# get dependencies
RUN set -x; \
sudo apt-get update \
&& DEBIAN_FRONTEND=noninteractive sudo apt-get install -y build-essential git-core m4 zlib1g zlib1g-dev libprotobuf-dev protobuf-compiler libprotoc-dev libgoogle-perftools-dev swig \
&& sudo -H python -m pip install scons==3.0.1 \
&& sudo -H python -m pip install six
RUN apt-get clean
# checkout repo with mercurial
# WORKDIR /usr/local/src
# RUN git clone https://github.com/gem5/gem5.git
# build it
WORKDIR /usr/local/src/gem5
ENTRYPOINT bash
I found that when downloading gem5 (possibly because the repository is so big), the clone kept failing with the error "fatal: unable to access 'https://github.com/gem5/gem5.git/': GnuTLS recv error (-110): The TLS connection was non-properly terminated."
So I commented out the
RUN git clone https://github.com/gem5/gem5.git command.
You could make the entrypoint scons itself.
ENTRYPOINT ["scons"]
Or use the absolute path to the binary. I don't know where it will be installed to; you need to check.
ENTRYPOINT ["/usr/local/bin/scons"]
Then you can run
docker run -it --rm -v ${HOST_GEM5}:${DOCKER_GEM5} gerrie/gem5 build/X86/gem5.opt
If the sole purpose of the image is to invoke scons, that would be fairly idiomatic.
Otherwise, remove the entrypoint. Also note that you don't need to wrap the command in bash -c.
If you have removed the entrypoint, you can run it like this:
docker run -it --rm -v ${HOST_GEM5}:${DOCKER_GEM5} gerrie/gem5 scons build/X86/gem5.opt
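The underlying rule (my reading of the Dockerfile reference) is that a shell-form ENTRYPOINT like the original ENTRYPOINT bash swallows whatever command you pass to docker run, because Docker wraps it as /bin/sh -c bash and ignores the extra arguments; an exec-form ENTRYPOINT appends them instead. A sketch of the two alternatives:

```dockerfile
# Shell form: "docker run <image> scons ..." is silently ignored;
# the container just runs /bin/sh -c bash
ENTRYPOINT bash

# Exec form: "docker run <image> build/X86/gem5.opt" appends its
# argument, so the container runs: scons build/X86/gem5.opt
ENTRYPOINT ["scons"]
```

This is why the original command dropped you into a bash prompt instead of compiling anything.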

Docker : starting container process caused "exec: \"-n\": executable file not found in $PATH": unknown

While launching a command on my docker image (run), I get the following error:
C:\Program Files\Docker\Docker\resources\bin\docker.exe: Error response from daemon: OCI runtime create failed: container_linux.go:349: starting container process caused "exec: \"-n\": executable file not found in $PATH": unknown.
The image is a Jmeter image that I have created myself:
FROM hauptmedia/java:oracle-java8
MAINTAINER maisie
ENV JMETER_VERSION 5.2.1
ENV JMETER_HOME /opt/jmeter
ENV JMETER_DOWNLOAD_URL https://archive.apache.org/dist/jmeter/binaries/apache-jmeter-${JMETER_VERSION}.tgz
RUN apt-get clean
RUN apt-get update
RUN apt-get -y install ca-certificates
RUN mkdir -p ${JMETER_HOME}
RUN cd ${JMETER_HOME}
RUN wget https://archive.apache.org/dist/jmeter/binaries/apache-jmeter-5.2.1.tgz
RUN tar -xvzf apache-jmeter-5.2.1.tgz
RUN rm apache-jmeter-5.2.1.tgz
The command that I am launching is:
#!/bin/bash
export volume_path=$(pwd)
export jmeter_path="/opt/apache-jmeter-5.2.1/bin"
docker run --volume ${volume_path}:${jmeter_path} my/jmeter -n -t ${jmeter_path}/TEST.jmx -l ${jmeter_path}/res.jtl
I really can't find any answer to my problem ...
Thank you in advance for any help.
The general form of the docker run command is
docker run [docker options] <image name> [command]
So you are running an image named my/jmeter, and the command you are having it run is -n -t .... You're getting this error because you've only given a list of options, not an actual command.
The first part of this is to include the actual command in your docker run line:
docker run --rm my/jmeter \
jmeter -n ...
There's also going to be a problem with how you install the software in the Dockerfile. (You do not need a docker run --volume to supply software that's already in the image.) Each RUN command starts in a new shell in a new environment (in a new container, even), so saying e.g. RUN cd ... on its own line doesn't do anything. You need to use Dockerfile directives like WORKDIR and ENV to change the environment. The jmeter command isn't in a standard binary directory, so you'll also have a little trouble running it. I might change:
# ...
# Run all APT commands in a single command
# (Layer caching can break an install if the list of packages changes)
RUN apt-get clean \
&& apt-get update \
&& apt-get -y install ca-certificates
# Download and unpack the JMeter tar file
# This is all in a single RUN command, so
# (1) the `cd` has a (temporary) effect for the following commands, and
# (2) the tar file isn't committed to an image before you `rm` it
RUN cd /opt \
&& wget ${JMETER_DOWNLOAD_URL} \
&& tar xzf apache-jmeter-${JMETER_VERSION}.tgz \
&& rm apache-jmeter-${JMETER_VERSION}.tgz
# Create a symlink to the jmeter process in a normal bin directory
RUN ln -s /opt/apache-jmeter-${JMETER_VERSION}/bin/jmeter /usr/local/bin
# Indicate the default command to run
CMD jmeter
Finally, there will be questions around where to store data files. It's better to store data outside the application directory; in a Docker context it's common enough to use short (if non-standard) directory paths like /data. Remember that any file path in a docker run command refers to a path in the container, but you need a docker run -v bind-mount option (your original --volume is equivalent) to make it visible on the host. That would give you a final command like:
docker run -v "$PWD:/data" my/jmeter \
jmeter -n -t /data/TEST.jmx -l /data/res.jtl
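The point above about each RUN starting in a new shell can be illustrated without Docker at all: every RUN behaves roughly like a separate sh -c invocation (an analogy, not what the builder literally does):

```shell
cd /
sh -c 'cd /tmp'   # the directory change dies with this shell...
sh -c 'pwd'       # ...so this still prints: /
```

This is exactly why RUN cd ${JMETER_HOME} on its own line had no effect on the RUN wget that followed it.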

docker create | Error response from daemon: No command specified

Attached is my Dockerfile. My intention is to use the following command:
docker build -t fbprophet . && \
docker create --name=awslambda fbprophet && \
docker cp awslambda:/var/task/venv/lib/python3.7/site-packages/lambdatest.zip . \
docker rm awslambda
However, I always receive this error here:
Error response from daemon: No command specified
When running these commands here, it works. I have to run it in different shells so the container doesn't stop running before my export is done.
docker build -t fbprophet . && docker container rm awslambda && docker run -it --name=awslambda fbprophet bash
docker cp awslambda:/var/task/venv/lib/python3.7/site-packages/lambdatest.zip .
Dockerfile:
FROM lambci/lambda:build-python3.7
ENV VIRTUAL_ENV=/venv
RUN python3 -m venv $VIRTUAL_ENV
ENV PATH="$VIRTUAL_ENV/bin:$PATH"
COPY requirements.txt .
RUN pip install --upgrade pip
RUN pip install -r requirements.txt
WORKDIR /var/task/venv/lib/python3.7/site-packages
COPY lambda_function.py .
COPY .lambdaignore .
RUN echo "Package size: $(du -sh | cut -f1)"
RUN zip -9qr lambdatest.zip *
RUN cat .lambdaignore | xargs zip -9qr /var/task/lambdatest.zip * -x
Probably the easiest way to get files out of an image you've built is to mount a volume onto a container, and make the main container process just be a cp command:
docker run \
--rm \
-v $PWD:/export \
fbprophet \
cp lambdatest.zip /export
(If you've built an application that uses ENTRYPOINT ["python"] or some such, you need to specify --entrypoint /bin/cp before the image name, and then put the arguments after the image name. Using CMD instead avoids this complication.)
Usually a Docker image has a packaged application (or a reasonable base one could build an application on), and running a container actually runs that application. An image is kind of an inconvenient way to just pass around files. You might find it easier and safer to run the same set of commands outside of Docker on your host to create a virtual environment, and you can just directly cp the file out of there when you're done.
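For completeness, the "No command specified" error itself comes from docker create being given an image that has neither a CMD/ENTRYPOINT nor an explicit command argument. Either give the image a default command, or pass a throwaway one on the command line; a sketch using the paths from the question:

```shell
# Option 1: add a default command to the end of the Dockerfile:
#   CMD ["/bin/bash"]

# Option 2: pass a (no-op) command to docker create; the container
# never runs, so any valid executable will do
docker build -t fbprophet . \
  && docker create --name awslambda fbprophet /bin/true \
  && docker cp awslambda:/var/task/venv/lib/python3.7/site-packages/lambdatest.zip . \
  && docker rm awslambda
```

Either variant lets the original build/create/cp/rm pipeline run unattended, without keeping an interactive container alive in a second shell.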

why am i not able to run docker command inside a shell script

I tried with #!/usr/bin/bash and #!/usr/bin/env sh and #!/usr/bin/env bash
docker run -t -i image-id /bin/sh is what I use; running ls there, I see that the file is present.
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
9c970708c1a3 bd24c6bef682 "/bin/sh -c 'sh benc…" 7 minutes ago Exited (127) 7 minutes ago amazing_lovelace
I am able to run both of these commands individually and they work:
echo "Getting the Container Name"
containerName=$(docker ps -l --format '{{.Names}}' 2>&1)
echo "Transferring the text file"
docker cp $containerName:/benchmark.txt /Users/xxxx/Devlocal/xxxxx/tests/logs/benchmark.txt
When put in a shell script and run, I get this error:
benchmark.sh: line 13: docker: not found
The command '/bin/sh -c sh benchmark.sh' returned a non-zero code: 127
please help
dockerfile
FROM gliderlabs/alpine:3.5
MAINTAINER LN
RUN apk add --update alpine-sdk openssl-dev
RUN apk add --no-cache git
RUN git clone https://github.com/giltene/wrk2.git && cd wrk2 && make
ADD ["benchmark.sh", "/benchmark.sh"]
RUN sh benchmark.sh
The script that you call inside the container executes docker commands. This can only work if you have docker installed in your container, which, by the looks of it, you do not.
If your only aim is to have access to the file /benchmark.txt, you can mount it when you start your container:
docker run -d -v /Users/xxxx/Devlocal/xxxxx/tests/logs/benchmark.txt:/benchmark.txt ...
Note that you have to add RUN touch /benchmark.txt to your Dockerfile.
The reason for that is here
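If the script genuinely must drive Docker from inside a container, the common pattern is to install only the Docker CLI in the image and bind-mount the host daemon's socket. A sketch (the image name my-image is a placeholder, and note the security caveat: a container with the socket effectively has root on the host):

```shell
# The image must contain the docker CLI itself,
# e.g. on Alpine: RUN apk add --no-cache docker-cli
docker run --rm \
  -v /var/run/docker.sock:/var/run/docker.sock \
  my-image sh benchmark.sh
```

With the socket mounted, docker commands in benchmark.sh talk to the host's daemon, so any containers they start are siblings of this one, not children.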

How to write docker file to run a docker run command inside an image

I have a shell script which creates and executes docker containers using the docker run command. I want to keep this script in a docker image and run it from there. I know that we cannot run docker inside a container. Is it possible to create a Dockerfile to achieve this?
Dockerfile:
FROM ubuntu:latest
RUN apt-get update && apt-get install -y vim-gnome curl
RUN curl -L https://raw.githubusercontent.com/xyz/abx/test/testing/testing_docker.sh -o testing_docker.sh
RUN chmod +x testing_docker.sh
CMD ["./testing_docker.sh"]
testing_docker.sh:
docker run -it docker info (sample command)
