Cannot run installed tool in Dockerfile even though it's there - docker

I installed diesel-cli in a Dockerfile:
FROM alpine:latest
ENV PATH="/root/.cargo/bin:${PATH}"
RUN apk update
RUN apk add postgresql curl gcc musl-dev libpq-dev bash
RUN curl https://sh.rustup.rs -sSf | sh -s -- -y
WORKDIR /app
RUN cargo install diesel_cli --no-default-features --features postgres
COPY . .
EXPOSE 8000
CMD [ "docker/entrypoint.sh"]
That works fine. The entrypoint.sh is:
#!/bin/bash
export PATH="/root/.cargo/bin:${PATH}"
ls /root/.cargo/bin/diesel
bash -c "/root/.cargo/bin/diesel setup"
The strange thing is that the ls shows that the diesel binary is there. But when running the Docker container it still says:
bash: line 1: /root/.cargo/bin/diesel: No such file or directory
I also tried calling diesel right from the Dockerfile with the same result.
Why can't I run diesel this way?

See the comment by The Fool! The binary is there, but it was linked against glibc, and Alpine ships musl instead: the "No such file or directory" error refers to the glibc dynamic loader the binary requests, not to the binary itself.
Using a different (glibc-based) base image resolves the problem:
FROM debian:bullseye-slim
ENV PATH="/root/.cargo/bin:${PATH}"
RUN apt update -y
RUN apt install postgresql curl gcc libpq-dev bash -y
RUN curl https://sh.rustup.rs -sSf | sh -s -- -y
WORKDIR /app
# This may take a minute
RUN cargo install diesel_cli --no-default-features --features postgres
COPY . .
# provision the database
EXPOSE 8000
CMD [ "docker/entrypoint.sh"]

Related

Docker Environment variable not working in CMD

I'm trying to pass the name of the script from docker run, but the CMD command isn't getting the script name.
Not sure what's wrong here; the same thing works fine in Spring Boot/Java projects.
Below is the Dockerfile:
FROM python:3.8.8
RUN apt-get update && apt-get upgrade -y && \
apt-get install -y nodejs
RUN apt-get install -y npm
WORKDIR /rubix-kyc
COPY . /rubix-kyc
RUN pip install -r /rubix-kyc/requirements.txt
ARG SCRIPT_NAME
ENV SCRIPT_NAME ${SCRIPT_NAME}
RUN mkdir -p video_recording/
RUN npm install
RUN npm install elastic-apm-node --save
EXPOSE 4443
CMD [ "npm", "run" , "${SCRIPT_NAME}"]
Here is the updated script for running docker:
docker run \
-e SCRIPT_NAME=start-local \
-p 4443:4443 $1
Need your help here.
For variables used in CMD, it is important to pass them as environment variables on docker run, besides defining them with ARG and assigning them with ENV in the Dockerfile, because CMD is evaluated at runtime, e.g.:
docker run --rm -ti -e SCRIPT_NAME=value-of-script-name <docker-image-id>
Please adjust your Dockerfile as well:
ARG SCRIPT_NAME
ENV SCRIPT_NAME=$SCRIPT_NAME
...
CMD npm run $SCRIPT_NAME
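Note that the exec (JSON-array) form CMD ["npm", "run", "${SCRIPT_NAME}"] never invokes a shell, so the variable is handed to npm literally; the shell form above lets /bin/sh expand $SCRIPT_NAME when the container starts. Putting it together (the image tag is illustrative):
# the script name is supplied at runtime, not baked in at build time
docker build -t rubix-kyc .
docker run --rm -ti -e SCRIPT_NAME=start-local -p 4443:4443 rubix-kyc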

How to run sudo commands in Docker?

I'm trying to build a Docker container containing SQLite3 and Flask, but SQLite isn't getting installed because sudo needs a password. How is this problem solved?
The error:
Step 6/19 : RUN sudo apt-get install -y sqlite3
---> Running in 9a9c8f8104a8
sudo: a terminal is required to read the password; either use the -S option to read from standard input or configure an askpass helper
The command '/bin/sh -c sudo apt-get install -y sqlite3' returned a non-zero code: 1
The Dockerfile:
FROM ubuntu:latest
RUN apt-get -y update && apt-get -y install sudo
RUN useradd -m docker && echo "docker:docker" | chpasswd && adduser docker sudo
USER docker
CMD /bin/bash
RUN sudo apt-get install -y sqlite3
RUN mkdir /db
RUN /usr/bin/sqlite3 /db/test.db
CMD /bin/bash
RUN sudo apt-get install -y python
WORKDIR /usr/src/app
ENV FLASK_APP=__init__.py
ENV FLASK_DEBUG=1
ENV FLASK_RUN_HOST=0.0.0.0
ENV FLASK_ENV=development
COPY requirements.txt requirements.txt
RUN pip install -r requirements.txt
COPY . .
CMD ["flask", "run"]
sudo is not necessary, as you can install everything before switching users.
You should also think in terms of consistent layers: each new version of your image should only replace the parts that changed.
https://docs.docker.com/develop/develop-images/dockerfile_best-practices/
Please find below an example of what you could use instead of the provided Dockerfile.
The idea is to install dependencies first and then run some configuration commands.
Be aware that CMD can be replaced at runtime:
docker run myimage <CMD>
# Base image, based on python installed on debian
FROM python:3.9-slim-bullseye
# Arguments used to run the app
ARG user=docker
ARG group=docker
ARG uid=1000
ARG gid=1000
ARG app_home=/usr/src/app
ARG sql_database_directory=/db
ARG sql_database_name=test.db
# Environment variables, user defined
ENV FLASK_APP=__init__.py
ENV FLASK_DEBUG=1
ENV FLASK_RUN_HOST=0.0.0.0
ENV FLASK_ENV=development
# Install sqlite
RUN apt-get update \
&& apt-get install -y sqlite3 \
&& apt-get clean
# Create app user
RUN mkdir -p ${app_home} \
&& chown ${uid}:${gid} ${app_home} \
&& groupadd -g ${gid} ${group} \
&& useradd -d "${app_home}" -u ${uid} -g ${gid} -s /bin/bash ${user}
# Create sql database directory
RUN mkdir -p ${sql_database_directory} \
&& chown ${uid}:${gid} ${sql_database_directory}
# Switch to user defined by arguments
USER ${user}
# Initialize the database file ("VACUUM;" forces sqlite3 to actually create it)
RUN /usr/bin/sqlite3 ${sql_database_directory}/${sql_database_name} "VACUUM;"
# Copy & Run application (by default)
WORKDIR ${app_home}
COPY . .
RUN pip install --no-cache-dir --no-warn-script-location -r requirements.txt
CMD ["python", "-m", "flask", "run"]

Private docker container to release

I am using a Dockerfile multistage configuration similar to the one below.
FROM swift:4.1 as builder
WORKDIR /app
COPY . .
RUN swift build --configuration release && mv `swift build -c release --show-bin-path` /build/bin
FROM ubuntu:16.04
RUN apt-get -qq update && apt-get install -y \
libicu55 libxml2 libbsd0 libcurl3 libatomic1 wget && rm -r /var/lib/apt/lists/*
RUN /bin/bash -c "$(wget -qO- https://apt.vapor.sh)"
RUN wget -q https://repo.vapor.codes/apt/keyring.gpg -O- | apt-key add -
RUN apt-get update && apt-get install swift vapor -y
WORKDIR /app
COPY --from=builder /build/bin .
COPY --from=builder /build/lib/* /usr/lib/
EXPOSE 3000
ENTRYPOINT ./Run serve -e prod -b 0.0.0.0 -p 3000
I am currently using this to deploy my service on a virtual server, which, due to its low performance, takes forever to build the project.
Is it good practice, and is it possible, to build the image that results from the builder stage and upload it to a private repo on Docker Hub, so I can do the expensive step from my local machine?
Could I then keep just the second stage on my virtual server? That means:
FROM myPrivateImageBuiltLocally as builder
WORKDIR /app
COPY . .
FROM ubuntu:16.04
RUN apt-get -qq update && apt-get install -y \
libicu55 libxml2 libbsd0 libcurl3 libatomic1 wget && rm -r /var/lib/apt/lists/*
RUN /bin/bash -c "$(wget -qO- https://apt.vapor.sh)"
RUN wget -q https://repo.vapor.codes/apt/keyring.gpg -O- | apt-key add -
RUN apt-get update && apt-get install swift vapor -y
WORKDIR /app
COPY --from=builder /build/bin .
COPY --from=builder /build/lib/* /usr/lib/
EXPOSE 3000
ENTRYPOINT ./Run serve -e prod -b 0.0.0.0 -p 3000
Yes, you can do that; you don't even have to build it locally. You can use the automated build feature of Docker Hub. It works like this:
1) Push the code to GitHub/Bitbucket.
2) Create a new image on Docker Hub and map it to the GitHub repo.
This will automatically build the image each time you push a new commit to the GitHub repo.
You can also see stats like build logs, success or failure, number of downloads, etc.
ref: https://docs.docker.com/docker-cloud/builds/automated-build/#configure-automated-build-settings
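If you would rather keep the manual route from the question, you can also build just the builder stage locally and push it yourself; a sketch, with placeholder image names:
# build only the first (builder) stage and push it to a private repo
docker build --target builder -t myuser/myapp-builder:latest .
docker login
docker push myuser/myapp-builder:latest
On the server, the second stage can then start from that image (FROM myuser/myapp-builder:latest as builder) and the COPY --from=builder lines keep working.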

How do i launch my jenkins-cli through docker?

I have created a Docker image that installs Java, Jenkins, and jenkins-cli. Now I need to pass some arguments through jenkins-cli, so I need to launch it. How do I do that? I have no idea how it launches.
Here is my Dockerfile:
FROM ubuntu:14.04
RUN apt update; \
apt upgrade -y; \
apt install -y default-jdk curl wget git maven nano unzip; \
apt-get clean
ENV JAVA_HOME /usr
ENV PATH $JAVA_HOME/bin:$PATH
RUN apt-get autoclean $$ apt-get clear cache
RUN apt-get -yqq update
RUN apt-get -yqq --no-install-recommends install git bzip2 curl unzip
RUN apt-get update
# copy jenkins war file to the container
ADD http://mirrors.jenkins.io/war-stable/2.107.1/jenkins.war /opt/jenkins.war
RUN chmod 644 /opt/jenkins.war
ENV JENKINS_HOME /jenkins
# configure the container to run jenkins, mapping container port 8080 to that host port
ENTRYPOINT ["java", "-jar", "/opt/jenkins.war"]
EXPOSE 8080
RUN mkdir /jenkins/
RUN echo 2.107.1 > /jenkins/jenkins.install.UpgradeWizard.state
RUN echo 2.107.1 > /jenkins/jenkins.install.InstallUtil.lastExecVersion
#jenkin-cli installation
RUN cd /tmp && curl --insecure -OL http://192.168.99.100:8080/jnlpJars/jenkins-cli.jar
ADD /tmp/jenkins-cli.jar /opt/jenkins/jenkins-cli.jar
RUN chmod 644 /opt/jenkins-cli.jar
WORKDIR /opt/jenkins
ENTRYPOINT ["java", "-jar", "jenkins-cli.jar", "-noCertificateCheck", "-noKeyAuth"]
CMD ["--help"]
This is the error:
Step 19/24 : RUN cd /tmp && curl --insecure -OL http://192.168.99.100:8080/jnlpJars/jenkins-cli.jar
---> Using cache
---> 9a6210009f84
Step 20/24 : ADD jenkins-cli.jar /opt/jenkins/jenkins-cli.jar
ADD failed: stat /mnt/sda1/var/lib/docker/tmp/docker-builder945232568/jenkins-cli.jar: no such file or directory
This error is showing up in many of my other builds too; if I can find the best solution here, I can solve all my other problems.
My second question is: is my script right for my current problem? How should I modify it?
Can anyone help me with this?
Thanks in advance.
You forgot to create the /opt/jenkins folder.
Change
ADD http://mirrors.jenkins.io/war-stable/2.107.1/jenkins.war /opt/jenkins.war
to
RUN mkdir /opt/jenkins
ADD http://mirrors.jenkins.io/war-stable/2.107.1/jenkins.war /opt/jenkins/jenkins.war
Given your CLI tool is available from within your container with a simple command such as cli-tool_command, this should work:
docker run --rm -it {container_image_name} {cli_tool_command} {cli_tool_args}
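For jenkins-cli specifically, that might look like this; a sketch, assuming the image above is tagged my-jenkins-cli and a Jenkins controller is reachable at the URL shown:
# arguments after the image name replace the --help default from CMD
docker run --rm -it my-jenkins-cli -s http://192.168.99.100:8080/ who-am-i
The -s flag tells jenkins-cli which Jenkins controller to talk to; who-am-i is a built-in command that reports how the CLI is authenticated.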

docker-compose: Service 'web' failed to build

I'm trying to install apache2, libapache2-mod-wsgi-py3, and openssl in the container. I've removed some packages and fixed typos in the Dockerfile, but the error is still there.
When I run docker-compose build, my setup runs fine until it hits the part of the Dockerfile that starts this install, and I get this error:
E: Unable to locate package RUN
E: Unable to locate package apt-get
E: Unable to locate package install
ERROR: Service 'web' failed to build: The command '/bin/sh -c apt-get update && apt-get install -y apache2 libapache2-mod-wsgi-py3 curl dpgk-sig RUN apt-get install -yq openssh-server' returned a non-zero code: 100
You can check the whole installation process here, and this is my Dockerfile:
FROM ubuntu:16.04
FROM python:3.5
ENV PYTHONUNBUFFERED 1
RUN cat /etc/passwd
RUN cat /etc/group
RUN apt-get update && apt-get install -y \
apache2 \
libapache2-mod-wsgi-py3 \
RUN apt-get install -y openssl
RUN mkdir /var/run/sshd
ENV APACHE_RUN_USER www-data
ENV APACHE_RUN_GROUP www-data
ENV APACHE_LOG_DIR /var/log/apache
RUN mkdir /code
WORKDIR /code
ADD requirements.txt /code/
RUN pip install -r requirements.txt
ADD . /code
EXPOSE 80
ADD config/apache/000-default.conf /etc/apache/sites-available/000-default.conf
ADD config/start.sh /tmp/start.sh
ADD src /var/www
RUN chown -R root:www-data /var/www
RUN chmod u+rwx,g+rx,o+rx /var/www
RUN find /var/www -type d -exec chmod u+rwx,g+rx,o+rx {} +
RUN find /var/www -type f -exec chmod u+rw,g+rw,o+r {} +
#essentially: CMD ["/usr/sbin/apachectl", "-D", "FOREGROUND"]
CMD ["/tmp/start.sh"]
Can someone explain why this is happening and how to fix it? Thanks.
Your problem is this line:
libapache2-mod-wsgi-py3 \
The \ is a line continuation, so the next thing apt-get sees is RUN and treats it as a package name (which it can't find). Lose the \ and it should work fine.
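With the continuation removed, that part of the Dockerfile would read:
RUN apt-get update && apt-get install -y \
    apache2 \
    libapache2-mod-wsgi-py3
RUN apt-get install -y openssl
(You could also fold openssl into the same apt-get install to keep everything in one layer.)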