How to run requirments.txt through Jenkins using Docker?

Hi, I am new to Jenkins. I have a repository that I clone first, then I build an image using a Dockerfile, which also installs requirments.txt. When I run my build.sh directly it works fine, but when I run it through Jenkins using a pipeline like this:
pipeline {
    agent any
    stages {
        stage('Build') {
            steps {
                sh 'sudo bash /home/salman/mlops/build.sh'
            }
        }
    }
}
it throws this error:
Step 1/6 : FROM ubuntu:latest
---> 9873176a8ff5
Step 2/6 : RUN apt-get update && apt-get install -y python3-pip python3-dev && cd /usr/local/bin && ln -s /usr/bin/python3 python && pip3 install --upgrade pip
---> Using cache
---> e5f345c76347
Step 3/6 : WORKDIR mlcode
---> Using cache
---> 273bf7d60f31
Step 4/6 : COPY requirments.txt .
COPY failed: file not found in build context or excluded by .dockerignore: stat requirments.txt: file does not exist
[Pipeline] }
[Pipeline] // stage
[Pipeline] }
[Pipeline] // node
[Pipeline] End of Pipeline
ERROR: script returned exit code 1
Finished: FAILURE
I even checked the path of the requirments.txt file and it is correct, but the build does not work from Jenkins. This is my Dockerfile:
FROM ubuntu:latest
RUN apt-get update \
    && apt-get install -y python3-pip python3-dev \
    && cd /usr/local/bin \
    && ln -s /usr/bin/python3 python \
    && pip3 install --upgrade pip
WORKDIR mlcode
COPY requirments.txt .
RUN pip install --no-cache-dir -r requirments.txt
CMD ["python3", "/mlcode/code.py"]
and this is my build.sh:
if cat /home/salman/mlops/Mlops/mlcode/code.py | grep sklearn ; then
    sudo docker build -t mlcontainer -f /home/salman/mlops/Dockerfile .
elif cat /home/salman/mlops/Mlops/mlcode/code.py | grep keras ; then
    sudo docker build -t dlcontainer -f /home/salman/mlops/Dockerfile .
else
    echo "Code not found";
fi
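A likely cause, given the same build-context problem covered in the related answers below: the trailing . makes docker build use the current working directory as the build context, and when Jenkins runs build.sh that directory is the Jenkins workspace, which does not contain requirments.txt. A hedged sketch of build.sh with an explicit context (the path assumes requirments.txt sits next to code.py in /home/salman/mlops/Mlops/mlcode; adjust it to where the file actually lives):
CODE_DIR=/home/salman/mlops/Mlops/mlcode   # directory that must contain requirments.txt and code.py
if grep -q sklearn "$CODE_DIR/code.py" ; then
    sudo docker build -t mlcontainer -f /home/salman/mlops/Dockerfile "$CODE_DIR"
elif grep -q keras "$CODE_DIR/code.py" ; then
    sudo docker build -t dlcontainer -f /home/salman/mlops/Dockerfile "$CODE_DIR"
else
    echo "Code not found"
fi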

Related

Docker: Error code 127 when executing shell script

So I can't seem to figure this out, but I'm getting error code 127 when running a Dockerfile. What causes this error?
My Dockerfile:
FROM composer as comp
FROM php:7.4-fpm-alpine
COPY --from=comp /usr/bin/composer /usr/bin/composer
COPY ./docker/install-deps.sh /tmp/install-deps.sh
RUN echo $(ls /tmp)
RUN /tmp/install-deps.sh
COPY . /var/www
WORKDIR /var/www
RUN composer install -o --no-dev
The results after building the Dockerfile:
Building php
Step 1/9 : FROM composer as comp
---> 433420023b60
Step 2/9 : FROM php:7.4-fpm-alpine
---> 78e945602ecc
Step 3/9 : COPY --from=comp /usr/bin/composer /usr/bin/composer
---> 46117e22b4de
Step 4/9 : COPY ./docker/install-deps.sh /tmp/install-deps.sh
---> 7e46a2ee759c
Step 5/9 : RUN echo $(ls /tmp)
---> Running in aa1f900032f9
install-deps.sh
Removing intermediate container aa1f900032f9
---> eb455e78b7f6
Step 6/9 : RUN /tmp/install-deps.sh
---> Running in 6402a15cccb2
/bin/sh: /tmp/install-deps.sh: not found
ERROR: Service 'php' failed to build: The command '/bin/sh -c /tmp/install-deps.sh' returned a non-zero code: 127
The install-deps.sh:
#!/bin/sh
set -e
apk add --update --no-cache \
    postgresql-dev \
    mysql-client \
    yaml-dev \
    git \
    openssl
docker-php-ext-install pcntl pdo_mysql pdo_pgsql
# yaml
apk add --no-cache --virtual .build-deps g++ make autoconf
pecl channel-update pecl.php.net
pecl install yaml
docker-php-ext-enable yaml
apk del --purge .build-deps
Docker is executing the install-deps.sh script; the issue is a command inside install-deps.sh that is not recognized when Docker attempts to run the script.
As you can see, the script returns exit code 127, meaning that a command within the file does not exist.
For instance - try this:
touch test.sh
echo "not-a-command" >> test.sh
chmod 755 test.sh
/bin/sh -c "./test.sh"
Output:
/root/test.sh: line 1: not-a-command: command not found
Now check the exit code:
echo $?
127
I would suggest running the script inside an interactive environment to identify/install the command that is not found.
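For example (a hedged sketch of that debugging approach, run from the project root so the docker/ directory containing install-deps.sh can be mounted):
docker run --rm -it -v "$PWD/docker:/tmp/docker" php:7.4-fpm-alpine sh
# inside the container, run the script with command tracing so the failing command is easy to spot
sh -x /tmp/docker/install-deps.sh
# if every command works here, also check the script's line endings (CRLF vs LF) and its shebang,
# since "/bin/sh: /tmp/install-deps.sh: not found" can also mean the interpreter line is broken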

How to optimise the Docker build process in JenkinsPipeline

I am having trouble optimizing my Docker build step.
Below is my use case:
In my Jenkinsfile I am building 3 Docker images: one from docker/test/Dockerfile and two (DEV and QA) from docker/dev/Dockerfile.
stage('Build') {
    steps {
        sh 'docker build -t Test -f docker/test/Dockerfile .'
        sh 'set +x && eval $(/usr/local/bin/aws-login/aws-login.sh $AWS_ACCOUNT jenkins eu-west-2) \
            && docker build -t DEV --build-arg S3_FILE_NAME=environment.dev.ts \
            --build-arg CONFIG_S3_BUCKET_URI=s3://bucket \
            --build-arg AWS_SESSION_TOKEN=$AWS_SESSION_TOKEN \
            --build-arg AWS_DEFAULT_REGION=$AWS_DEFAULT_REGION \
            --build-arg AWS_SECRET_ACCESS_KEY=$AWS_SECRET_ACCESS_KEY \
            --build-arg AWS_ACCESS_KEY_ID=$AWS_ACCESS_KEY_ID \
            -f docker/dev/Dockerfile .'
        sh 'set +x && eval $(/usr/local/bin/aws-login/aws-login.sh $AWS_ACCOUNT jenkins eu-west-2) \
            && docker build -t QA --build-arg S3_FILE_NAME=environment.qa.ts \
            --build-arg CONFIG_S3_BUCKET_URI=s3://bucket \
            --build-arg AWS_SESSION_TOKEN=$AWS_SESSION_TOKEN \
            --build-arg AWS_DEFAULT_REGION=$AWS_DEFAULT_REGION \
            --build-arg AWS_SECRET_ACCESS_KEY=$AWS_SECRET_ACCESS_KEY \
            --build-arg AWS_ACCESS_KEY_ID=$AWS_ACCESS_KEY_ID \
            -f docker/dev/Dockerfile .'
    }
}
stage('Test') {
    steps {
        sh 'docker run --rm TEST npm run test'
    }
}
Below are my two Dockerfiles:
docker/test/Dockerfile:
FROM node:lts
RUN mkdir /usr/src/app
WORKDIR /usr/src/app
ENV PATH /usr/src/app/node_modules/.bin:$PATH
RUN wget -q -O - https://dl-ssl.google.com/linux/linux_signing_key.pub | apt-key add -
RUN sh -c 'echo "deb [arch=amd64] http://dl.google.com/linux/chrome/deb/ stable main" >> /etc/apt/sources.list.d/google.list'
RUN apt-key update && apt-get update && apt-get install -y google-chrome-stable
COPY . /usr/src/app
RUN npm install
CMD sh ./docker/test/docker-entrypoint.sh
docker/dev/Dockerfile:
FROM node:lts as dev-builder
ARG CONFIG_S3_BUCKET_URI
ARG S3_FILE_NAME
ARG AWS_SESSION_TOKEN
ARG AWS_DEFAULT_REGION
ARG AWS_SECRET_ACCESS_KEY
ARG AWS_ACCESS_KEY_ID
RUN apt-get update
RUN apt-get install python3-dev -y
RUN curl -O https://bootstrap.pypa.io/get-pip.py
RUN python3 get-pip.py
RUN pip3 install awscli --upgrade
RUN mkdir /app
WORKDIR /app
COPY . .
RUN aws s3 cp "$CONFIG_S3_BUCKET_URI/$S3_FILE_NAME" src/environments/environment.dev.ts
RUN cat src/environments/environment.dev.ts
RUN npm install
RUN npm run build-dev
FROM nginx:stable
COPY nginx.conf /etc/nginx/nginx.conf
COPY --from=dev-builder /app/dist/ /usr/share/nginx/html/
Every time it takes 20-25 minutes to build the images.
Is there any way I can optimize the Dockerfiles for a faster build process?
Suggestions are welcome. npm install (run during the build) uses package.json to install the dependencies, which is one of the reasons every dependency is installed on every build.
Thanks
You can use a combination of base images and multi-stage builds to speed up your builds.
Base image with pre-installed packages/dependencies
Installing python3, pip, google-chrome, awscli etc. does not need to happen on every build. Those layers may get cached if you build on a single machine, but if you have multiple build machines or clean the cache, you will rebuild them unnecessarily. You can build a base image that already contains these packages and use it as the base for your app.
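For example, a minimal sketch of such a base image (the tag mycompany/node-chrome-aws is made up; the install steps just mirror the ones already in the Dockerfiles above):
# Dockerfile.base -- build and push once, then reuse as the FROM of the app Dockerfiles
FROM node:lts
# Chrome, for the test image
RUN wget -q -O - https://dl-ssl.google.com/linux/linux_signing_key.pub | apt-key add - \
    && echo "deb [arch=amd64] http://dl.google.com/linux/chrome/deb/ stable main" >> /etc/apt/sources.list.d/google.list \
    && apt-get update && apt-get install -y google-chrome-stable python3-dev \
    && rm -rf /var/lib/apt/lists/*
# awscli, for the dev/qa images
RUN curl -O https://bootstrap.pypa.io/get-pip.py \
    && python3 get-pip.py \
    && pip3 install awscli --upgrade
The app Dockerfiles then start with FROM mycompany/node-chrome-aws:lts and contain only the per-build steps.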
Multi stage builds
You are copying your source code and then doing npm install. Even if package.json has not changed, that layer will be rebuilt whenever any other file in the source code changes.
You can create a multi-stage Dockerfile where you copy only package.json in the first stage and run npm install (and similar commands) there. That layer will be rebuilt only when package.json changes.
In your second stage, you can just copy the npm cache (the installed node_modules) from the first stage.
FROM node:lts as node_cache
WORKDIR /cache/
COPY package.json .
RUN npm install

FROM NEW_BASE_IMAGE_WITH_CHROME_ETC_DEPENDENCIES
COPY --from=node_cache /cache/ .
COPY . .
RUN npm run build-dev
<snip>
Identify any other such optimisations that you can make.

Docker build command fails on COPY

Hi, I have a Dockerfile which is failing on the COPY command. It was running fine initially, but then it suddenly started failing during the build process. The Dockerfile basically sets up the dev environment and authenticates with GCP.
FROM ubuntu:16.04
## ENV Variables
ENV PYTHON_VERSION="3.6.5"
ENV BUCKET_NAME='detection-sandbox'
ENV DIRECTORY='/usr/local/gcloud'
# Update and Install packages
RUN apt-get update -y \
&& apt-get install -y \
curl \
wget \
tar \
xz-utils \
bc \
build-essential \
cmake \
curl \
zlib1g-dev \
libssl-dev \
libsqlite3-dev \
python3-pip \
python3-setuptools \
unzip \
g++ \
git \
python-tk
# Install Python 3.6.5
RUN wget https://www.python.org/ftp/python/${PYTHON_VERSION}/Python-${PYTHON_VERSION}.tar.xz \
&& tar -xvf Python-${PYTHON_VERSION}.tar.xz \
&& rm -rf Python-${PYTHON_VERSION}.tar.xz \
&& cd Python-${PYTHON_VERSION} \
&& ./configure \
&& make install \
&& cd / \
&& rm -rf Python-${PYTHON_VERSION}
# Install pip
RUN curl -O https://bootstrap.pypa.io/get-pip.py \
&& python3 get-pip.py \
&& rm get-pip.py
# Add SNI support to Python
RUN pip --no-cache-dir install \
pyopenssl \
ndg-httpsclient \
pyasn1
## Download and Install Google Cloud SDK
RUN mkdir -p /usr/local/gcloud \
&& curl https://sdk.cloud.google.com > install.sh \
&& bash install.sh --disable-prompts --install-dir=${DIRECTORY}
# Adding the package path to directory
ENV PATH $PATH:${DIRECTORY}/google-cloud-sdk/bin
# working directory
WORKDIR /usr/src/app
COPY requirements.txt ./ \
testproject-264512-9de8b1b35153.json ./
It fails at this step :
Step 13/21 : COPY requirements.txt ./ testproject-264512-9de8b1b35153.json ./
COPY failed: stat /var/lib/docker/tmp/docker-builder942576416/testproject-264512-9de8b1b35153.json: no such file or directory
Any leads in this would be helpful.
How are you running the docker build command?
In the Docker best practices I've read that docker fails if you try to build your image from stdin using -:
Attempting to build a Dockerfile that uses COPY or ADD will fail if this syntax is used. The following example illustrates this:
# create a directory to work in
mkdir example
cd example
# create an example file
touch somefile.txt
docker build -t myimage:latest -<<EOF
FROM busybox
COPY somefile.txt .
RUN cat /somefile.txt
EOF
# observe that the build fails
...
Step 2/3 : COPY somefile.txt .
COPY failed: stat /var/lib/docker/tmp/docker-builder249218248/somefile.txt: no such file or directory
I've reproduced the issue... Here is my Dockerfile:
FROM alpine:3.7
## ENV Variables
ENV PYTHON_VERSION="3.6.5"
ENV BUCKET_NAME='detection-sandbox'
ENV DIRECTORY='/usr/local/gcloud'
# working directory
WORKDIR /usr/src/app
COPY kk.txt ./ \
kk.2.txt ./
If I create the image by running docker build -t testImage:1 [DOCKERFILE_FOLDER], Docker creates the image and it works correctly.
However if I try the same command from stdin as:
docker build -t test:2 - <<EOF
FROM alpine:3.7
ENV PYTHON_VERSION="3.6.5"
ENV BUCKET_NAME='detection-sandbox'
ENV DIRECTORY='/usr/local/gcloud'
WORKDIR /usr/src/app
COPY kk.txt ./ kk.2.txt ./
EOF
I get the following error:
Step 1/6 : FROM alpine:3.7
---> 6d1ef012b567
Step 2/6 : ENV PYTHON_VERSION="3.6.5"
---> Using cache
---> 734d2a106144
Step 3/6 : ENV BUCKET_NAME='detection-sandbox'
---> Using cache
---> 18fba29fffdc
Step 4/6 : ENV DIRECTORY='/usr/local/gcloud'
---> Using cache
---> d926a3b4bc85
Step 5/6 : WORKDIR /usr/src/app
---> Using cache
---> 57a1868f5f27
Step 6/6 : COPY kk.txt ./ kk.2.txt ./
COPY failed: stat /var/lib/docker/tmp/docker-builder518467298/kk.txt: no such file or directory
It seems that docker builds images from /var/lib/docker/tmp/ when you build an image from stdin; since there is no build context, ADD and COPY commands don't work.
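If you do want to keep the Dockerfile on stdin, recent Docker versions let you pipe just the Dockerfile with -f- while still taking the build context from a directory, for example:
# Dockerfile from stdin, build context from the current directory
docker build -t test:2 -f- . <<EOF
FROM alpine:3.7
WORKDIR /usr/src/app
COPY kk.txt ./ kk.2.txt ./
EOF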
An incorrect source path is also a common error.
Use
COPY ./directory/testproject-264512-9de8b1b35153.json /dir/
instead of
COPY testproject-264512-9de8b1b35153.json /dir/

Dockerfile is not running the desired shell script

I am trying to configure and run a certain program using Docker. I am a beginner with Docker, so beware of newbie mistakes!
FROM ubuntu:16.04
# create non-root user
ENV USERNAME ros
RUN adduser --ingroup sudo --disabled-password --gecos "" --shell /bin/bash --home /home/$USERNAME $USERNAME
RUN bash -c 'echo $USERNAME:ros | chpasswd'
ENV HOME /home/$USERNAME
RUN apt-get update && apt-get install --assume-yes wget sudo && \
wget https://raw.githubusercontent.com/ROBOTIS-GIT/robotis_tools/master/install_ros_kinetic.sh && \
chmod 755 ./install_ros_kinetic.sh && \
bash ./install_ros_kinetic.sh
RUN apt-get install --assume-yes ros-kinetic-joy ros-kinetic-teleop-twist-joy ros-kinetic-teleop-twist-keyboard ros-kinetic-laser-proc ros-kinetic-rgbd-launch ros-kinetic-depthimage-to-laserscan ros-kinetic-rosserial-arduino ros-kinetic-rosserial-python ros-kinetic-rosserial-server ros-kinetic-rosserial-client ros-kinetic-rosserial-msgs ros-kinetic-amcl ros-kinetic-map-server ros-kinetic-move-base ros-kinetic-urdf ros-kinetic-xacro ros-kinetic-compressed-image-transport ros-kinetic-rqt-image-view ros-kinetic-gmapping ros-kinetic-navigation ros-kinetic-interactive-markers
RUN cd /home/$USERNAME/catkin_ws/src/
RUN git clone https://github.com/ROBOTIS-GIT/turtlebot3_msgs.git
RUN git clone https://github.com/ROBOTIS-GIT/turtlebot3.git
USER $USERNAME
WORKDIR /home/$USERNAME
# add catkin env
RUN echo 'source /opt/ros/kinetic/setup.bash' >> /home/$USERNAME/.bashrc
RUN echo 'source /home/$USERNAME/catkin_ws/devel/setup.bash' >> /home/$USERNAME/.bashrc
RUN /bin/bash -c "source /home/ros/.bashrc && cd /home/$USERNAME/catkin_ws && catkin_make"
This gave the following output:
~/m/rosdocker docker build --rm -f "Dockerfile" -t rosdocker:latest .
Sending build context to Docker daemon 5.632kB
Step 1/15 : FROM ubuntu:16.04
---> b0ef3016420a
Step 2/15 : ENV USERNAME ros
---> Using cache
---> 25bf14574e2b
Step 3/15 : RUN adduser --ingroup sudo --disabled-password --gecos "" --shell /bin/bash --home /home/$USERNAME $USERNAME
---> Using cache
---> 3a2787196745
Step 4/15 : RUN bash -c 'echo $USERNAME:ros | chpasswd'
---> Using cache
---> fa4bc1d220a8
Step 5/15 : ENV HOME /home/$USERNAME
---> Using cache
---> f987768fa3b1
Step 6/15 : RUN apt-get update && apt-get install --assume-yes wget sudo && wget https://raw.githubusercontent.com/ROBOTIS-GIT/robotis_tools/master/install_ros_kinetic.sh && chmod 755 ./install_ros_kinetic.sh && bash ./install_ros_kinetic.sh
---> Using cache
---> 9c26b8318f2e
Step 7/15 : RUN apt-get install --assume-yes ros-kinetic-joy ros-kinetic-teleop-twist-joy ros-kinetic-teleop-twist-keyboard ros-kinetic-laser-proc ros-kinetic-rgbd-launch ros-kinetic-depthimage-to-laserscan ros-kinetic-rosserial-arduino ros-kinetic-rosserial-python ros-kinetic-rosserial-server ros-kinetic-rosserial-client ros-kinetic-rosserial-msgs ros-kinetic-amcl ros-kinetic-map-server ros-kinetic-move-base ros-kinetic-urdf ros-kinetic-xacro ros-kinetic-compressed-image-transport ros-kinetic-rqt-image-view ros-kinetic-gmapping ros-kinetic-navigation ros-kinetic-interactive-markers
---> Using cache
---> 4b4c0abace7f
Step 8/15 : RUN cd /home/$USERNAME/catkin_ws/src/
---> Using cache
---> fb87caedbef8
Step 9/15 : RUN git clone https://github.com/ROBOTIS-GIT/turtlebot3_msgs.git
---> Using cache
---> d2d7f198e018
Step 10/15 : RUN git clone https://github.com/ROBOTIS-GIT/turtlebot3.git
---> Using cache
---> 42ddcbbc19e1
Step 11/15 : USER $USERNAME
---> Using cache
---> 4526fd7b5d75
Step 12/15 : WORKDIR /home/$USERNAME
---> Using cache
---> 0543c327b994
Step 13/15 : RUN echo 'source /opt/ros/kinetic/setup.bash' >> /home/$USERNAME/.bashrc
---> Using cache
---> dff40263114a
Step 14/15 : RUN echo 'source /home/$USERNAME/catkin_ws/devel/setup.bash' >> /home/$USERNAME/.bashrc
---> Using cache
---> fff611e9d9db
Step 15/15 : RUN /bin/bash -c "source /home/ros/.bashrc && cd /home/$USERNAME/catkin_ws && catkin_make"
---> Running in 7f26a34419a3
/bin/bash: catkin_make: command not found
The command '/bin/sh -c /bin/bash -c "source /home/ros/.bashrc && cd /home/$USERNAME/catkin_ws && catkin_make"' returned a non-zero code: 127
~/m/rosdocker
I need it to run catkin_make (which is on the path set up by .bashrc)
Exit code 127 from shell commands means "command not found". Is .bashrc executable? Normally it is not; you probably want to source it instead:
source /home/$USERNAME/.bashrc
As Dan Farrel pointed out in his comment, sourcing the file in a RUN command only has an effect within that shell.
To source .bashrc during the build
If you want it to have an effect for later commands in the build, you need to run them all in the same RUN statement. In the snippet below, .bashrc is sourced in the same shell in which catkin_make is run.
RUN . /home/ros/.bashrc && \
    cd /home/$USERNAME/catkin_ws && \
    catkin_make
To source the .bashrc file when the container starts
What should happen when the container is run with docker run is specified using the ENTRYPOINT statement. If you just want a plain bash prompt, specify /bin/bash. The shell will be run as the user specified in the USER statement.
So, in summary, if you add the following to the end of your Dockerfile
USER ros
ENTRYPOINT /bin/bash
then when someone runs the container using docker run -it <containerName>, they will land in a bash shell as the user ros. Bash will automatically source the /home/ros/.bashrc file, and all definitions inside will be available in the shell. (Your RUN statement containing the .bashrc file can be removed.)
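For example (assuming the image is tagged rosdocker:latest, as in the build command above):
docker run -it rosdocker:latest
# you land at a bash prompt as the user 'ros', with /home/ros/.bashrc already sourced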

Setting up our Rasa/NLU container, error?

I have this file Dockerfile.nlu
FROM chatbot/spacy:latest
WORKDIR /app
COPY nlu ./agent_nlu
RUN python -m rasa_nlu.train --config agent_nlu/config.yml --data agent_nlu/data/ --path agent_nlu/agent --fixed_model_name default
and I get the error below:
]$ sudo docker build -t nlu:latest -f docker/Dockerfile.nlu .
Sending build context to Docker daemon 9.216kB
Step 1/4 : FROM chatbot/spacy:latest
---> 496dc6a38abb
Step 2/4 : WORKDIR /app
---> Using cache
---> 7f02012c8452
Step 3/4 : COPY nlu ./agent_nlu
COPY failed: stat /var/lib/docker/tmp/docker-builder363868051/nlu: no such file or directory
It doesn't look like Docker can find the nlu directory. Are you sure it exists? Are you sure that you are executing the command from the correct directory?
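For the COPY error itself, a quick hedged check (assuming the intended layout has the nlu directory sitting next to the docker directory): the trailing . makes the current directory the build context, so run the build from the directory that actually contains nlu:
cd /path/to/project-root   # hypothetical path; listing it should show both docker/ and nlu/
ls
# docker/  nlu/  ...
sudo docker build -t nlu:latest -f docker/Dockerfile.nlu .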
But you also aren't installing Rasa at all, or any of its requirements. Is there a reason you aren't using the pre-built Rasa images (available on Docker Hub, with accompanying docs)?
Here is a fully functional Dockerfile pulled from their repo.
FROM python:3.6-slim
ENV RASA_NLU_DOCKER="YES" \
RASA_NLU_HOME=/app \
RASA_NLU_PYTHON_PACKAGES=/usr/local/lib/python3.6/dist-packages
# Run updates, install basics and cleanup
# - build-essential: Compile specific dependencies
# - git-core: Checkout git repos
RUN apt-get update -qq \
&& apt-get install -y --no-install-recommends build-essential git-core openssl libssl-dev libffi6 libffi-dev curl \
&& apt-get clean \
&& rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/*
WORKDIR ${RASA_NLU_HOME}
COPY . ${RASA_NLU_HOME}
# use bash always
RUN rm /bin/sh && ln -s /bin/bash /bin/sh
RUN pip install -r alt_requirements/requirements_spacy_sklearn.txt
RUN pip install -e .
RUN pip install https://github.com/explosion/spacy-models/releases/download/en_core_web_md-2.0.0/en_core_web_md-2.0.0.tar.gz --no-cache-dir > /dev/null \
&& python -m spacy link en_core_web_md en \
&& pip install https://github.com/explosion/spacy-models/releases/download/de_core_news_sm-2.0.0/de_core_news_sm-2.0.0.tar.gz --no-cache-dir > /dev/null \
&& python -m spacy link de_core_news_sm de
COPY sample_configs/config_spacy.yml ${RASA_NLU_HOME}/config.yml
VOLUME ["/app/projects", "/app/logs", "/app/data"]
EXPOSE 5000
ENTRYPOINT ["./entrypoint.sh"]
CMD ["start", "-c", "config.yml", "--path", "/app/projects"]
