Binary file error when executing a shell script in a Docker container - docker

I'm trying to execute a shell script that runs a Python script, but I don't understand why I get the error shown below.
File directory structure:
/home/kwstat/workplace/analysis/report_me

home/
  kwstat/
    workplace/
      analysis/
        report_me/
          report_me.sh
          python_file/
            python_code.py
            ...
$ docker exec -it test /bin/bash -c "source /home/kwstat/workplace/analysis/report_me/report_me.sh"
# Error
/home/kwstat/workplace/analysis/report_me/report_me.sh: line 30: source: /usr/local/bin/python: cannot execute binary file
I tried several things in the Dockerfile, but the same error occurred.
# 1.CMD ["/bin/bash","-l","-c"]
CMD ["/bin/bash","-l","-c"]
# 2. CMD bin/bash
CMD bin/bash
#########My Dockerfile#############
FROM continuumio/miniconda3
# System packages
RUN apt-get update && apt-get install -y curl
RUN apt-get update && apt-get install -y subversion
WORKDIR /home/kwstat/workplace/analysis/report_me
COPY environments.yml /home/kwstat/workplace/analysis/report_me/environments.yml
RUN conda env create -f /home/kwstat/workplace/analysis/report_me/environments.yml
# Make RUN commands use the new environment:
SHELL ["conda", "run", "-n", "myenv", "/bin/bash", "-c"]
RUN echo "conda activate my_env" >> ~/.profile
# Activate the environment, and make sure it's activated:
#RUN echo "Make sure flask is installed:"
COPY requirements.txt /home/kwstat/me_report_dockerfile/requirements.txt
RUN pip install -r /home/kwstat/me_report_dockerfile/requirements.txt
WORKDIR /home/kwstat/workplace/analysis/report_me/python_file
COPY python_file/ /home/kwstat/workplace/analysis/report_me/python_file
WORKDIR /home/kwstat/workplace/analysis/report_me/
COPY report_me.sh ./report_me.sh
RUN chmod +x report_me.sh
CMD ["/bin/bash"]
Any help would be appreciated ~

My problem was in the shell script: I set the conda env path inside the script,
and that solved everything.
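For reference, a minimal sketch of what the fixed report_me.sh could look like, assuming the environment is the myenv referenced in the SHELL instruction (the original error came from calling source on the Python binary, which is not a shell script):

#!/bin/bash
# Load conda's shell functions into this shell;
# the continuumio/miniconda3 image installs conda under /opt/conda
source /opt/conda/etc/profile.d/conda.sh
conda activate myenv
# Run the script with the environment's python instead of sourcing a binary
python /home/kwstat/workplace/analysis/report_me/python_file/python_code.py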

Related

Docker RUN command near end of Dockerfile ... boots into container unless I give a CMD at the end but doesn't work either way. Any ideas?

I am working on a Dockerfile to be used with Google Cloud Run.
I'm not getting the command to run.
Here's the (slightly obfuscated) Dockerfile:
FROM gcr.io/google.com/cloudsdktool/google-cloud-cli:latest
RUN apt-get update
RUN pip install --upgrade pip
COPY requirements.txt /root/
RUN pip install -r /root/requirements.txt
RUN useradd -m ubuntu
ENV HOME=/home/ubuntu
USER ubuntu
COPY --chown=ubuntu:ubuntu . /home/ubuntu
WORKDIR /home/ubuntu
RUN gcloud config set project our-customer-tech-sem-prod
RUN gcloud auth activate-service-account --key-file=./service/our-customer-tech-sem-prod-a02b2c7f4536.json
RUN gcloud compute config-ssh
ENV GOOGLE_APPLICATION_CREDENTIALS=./service/our-customer-tech-sem-prod-a02b2c7f4536.json
CMD ["gcloud", "compute", "ssh", "--internal-ip", "our-persist-cluster-py3-prod", "--zone=us-central1-b", "--project", "our-customer-tech-sem-prod", "--", "'ps -ef'", "|", "./checker2.py"]
This tries to run the CMD at the end, but says it can't find the host specified. (Runs fine from the command line outside Docker.)
There were a couple of things wrong at the end: (1) a typo in the host name (fixed with the help of a colleague), and (2) I had to turn the CMD into a shell script to get the pipe inside it to work correctly.
Here's the final (slightly obfuscated) Dockerfile:
FROM gcr.io/google.com/cloudsdktool/google-cloud-cli:latest
RUN apt-get update
RUN pip install --upgrade pip
COPY requirements.txt /root/
RUN pip install -r /root/requirements.txt
RUN useradd -m ubuntu
RUN mkdir /secrets
COPY secrets/* /secrets
ENV HOME=/home/ubuntu
USER ubuntu
COPY --chown=ubuntu:ubuntu . /home/ubuntu
WORKDIR /home/ubuntu
RUN gcloud config set project our-customer-tech-sem-prod
RUN gcloud auth activate-service-account --key-file=./service/our-customer-tech-sem-prod-a02b2c7f4536.json
RUN gcloud compute config-ssh
ENV GOOGLE_APPLICATION_CREDENTIALS=./service/our-customer-tech-sem-prod-a02b2c7f4536.json
CMD ["./rungcloud.sh"]

Docker Environment variable not working in CMD

I'm trying to pass the name of the script via docker run, but the CMD isn't receiving the script name.
Not sure what's wrong here; the same thing works fine in Spring Boot/Java projects.
Below is the Dockerfile:
FROM python:3.8.8
RUN apt-get update && apt-get upgrade -y && \
apt-get install -y nodejs
RUN apt-get install -y npm
WORKDIR /rubix-kyc
COPY . /rubix-kyc
RUN pip install -r /rubix-kyc/requirements.txt
ARG SCRIPT_NAME
ENV SCRIPT_NAME ${SCRIPT_NAME}
RUN mkdir -p video_recording/
RUN npm install
RUN npm install elastic-apm-node --save
EXPOSE 4443
CMD [ "npm", "run" , "${SCRIPT_NAME}"]
Here is the updated script for running docker:
docker run \
-e SCRIPT_NAME=start-local \
-p 4443:4443 $1
I need your help here.
For variables used in CMD, it is important to pass them as environment variables on docker run, in addition to defining them with ARG and assigning them with ENV in the Dockerfile, because CMD is evaluated at runtime. Note also that the exec form of CMD (the JSON array) does not perform variable expansion, so the shell form is needed, e.g.:
docker run --rm -ti -e SCRIPT_NAME=value-of-script-name <docker-image-id>
Please adjust your Dockerfile as well:
ARG SCRIPT_NAME
ENV SCRIPT_NAME=$SCRIPT_NAME
...
CMD npm run $SCRIPT_NAME
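For illustration, a sketch of the full flow with the adjusted Dockerfile (the image tag rubix-kyc is assumed):

docker build -t rubix-kyc .
docker run --rm -ti -e SCRIPT_NAME=start-local -p 4443:4443 rubix-kyc
# with the shell-form CMD this expands at runtime to: npm run start-local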

Dockerfile - Not all commands in CMD are running

When I create an image from the following Dockerfile, the command poetry run python manage.py setstaticpages is skipped (and thus not run). Why is this happening?
NOTE: I've tried running the aforementioned command from inside a shell, and it worked perfectly. However, I need it to be executed when the container is built.
# Define Image
FROM python:3.8
# Set Environment Variable
ENV PYTHONUNBUFFERED 1
ENV C_FORCE_ROOT true
# Making source and static directory
RUN mkdir /src
RUN mkdir /static
# Creating Work Directory
WORKDIR /src
# Update pip
RUN pip install --upgrade pip
COPY ./src/poetry.lock /scripts/
COPY ./src/pyproject.toml /scripts/
RUN pip install poetry
CMD ["sh", "-c", "poetry install; poetry run python manage.py collectstatic --no-input; poetry run python manage.py migrate;", "poetry run python manage.py setstaticpages;", "poetry run gunicorn -w 4 -t 180 -b 0.0.0.0:8000 backend.wsgi:application"]
For chaining commands, either use a bash script and call that in CMD, or chain the commands up inside the shell with &&:
CMD [ "sh" , "-c" , "whatever && whatever_next && whatever_last" ]
or
ADD docker-entrypoint.sh /
RUN chmod +x /docker-entrypoint.sh
CMD ["/docker-entrypoint.sh"]
#!/bin/bash
#docker-entrypoint.sh
whatever
whatever_next
whatever_last
This may not solve your problem, but this way you'll get proper signal handling by running in one process. If it still does not work, please add logs to your question.
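Applied to the Dockerfile above, the chained form would look roughly like this (in the original CMD, the strings after the first one following "-c" were passed to sh as positional parameters and never executed, which is why setstaticpages was skipped):

CMD ["sh", "-c", "poetry install && poetry run python manage.py collectstatic --no-input && poetry run python manage.py migrate && poetry run python manage.py setstaticpages && poetry run gunicorn -w 4 -t 180 -b 0.0.0.0:8000 backend.wsgi:application"]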

How to run a specific file first before anything else happens in docker run

I have the following Dockerfile.
MAINTAINER Your Name "youremail#domain.tld"
RUN apt-get update -y && \
apt-get install -y python-pip python-dev
# We copy just the requirements.txt first to leverage Docker cache
COPY ./requirements.txt /app/requirements.txt
WORKDIR /app
RUN pip install -r requirements.txt
COPY . /app
ENTRYPOINT [ "python" ]
CMD [ "app.py" ]
There is one file that I want to run even before app.py runs. How can I achieve that? I don't want to put the code inside app.py.
One solution could be to use a docker-entrypoint.sh script. Basically, that entrypoint lets you define a set of commands to run to initialize your program.
For example, I could create the following docker-entrypoint.sh:
#!/bin/bash
set -e

if [ "$1" = 'app' ]; then
    sh /run-my-other-file.sh
    exec python app.py
fi

exec "$@"
And I would use it as so in my Dockerfile:
FROM alpine
COPY ./docker-entrypoint.sh /
ENTRYPOINT ["/docker-entrypoint.sh"]
CMD ["app"]
There are a lot of articles and examples online about docker entrypoints. Give it a quick search; I am sure you will find many interesting examples used by well-known production-grade containers.
You could try creating a bash file called run.sh and putting inside it:
app.py
Then try changing the CMD:
CMD [ "run.sh" ]
Also make sure the permissions on run.sh are executable.
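Note that with the question's ENTRYPOINT [ "python" ], CMD [ "run.sh" ] would actually be executed as python run.sh and fail. A sketch that avoids this, assuming a hypothetical before_app.py as the file that must run first:

#!/bin/bash
# run.sh -- before_app.py is an assumed name for the file to run first
python before_app.py
exec python app.py

and in the Dockerfile, replacing the ENTRYPOINT/CMD pair:

RUN chmod +x run.sh
ENTRYPOINT [ "./run.sh" ]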

Dockerfile supervisord cannot find path

For some reason supervisord cannot start up when executing docker run... If I print out the path where the supervisord configuration is stored, I can clearly see that the file is present.
Below is the part of my Dockerfile that's not currently commented out.
FROM ubuntu:16.04
MAINTAINER Kevin Gilbert
# Update Packages
RUN apt-get -y update
# Install basics
RUN apt-get -y install curl wget make gcc build-essential
# Setup Supervisor
RUN apt-get -y install supervisor
RUN mkdir -p /var/log/supervisor
COPY supervisord.conf /etc/supervisor/conf.d/supervisord.conf
CMD ["/usr/bin/supervisord", "-c /etc/supervisor/conf.d/supervisord.conf"]
Here is the error I get in the terminal after running:
remote-testing:analytics-portal kgilbert$ docker run kmgilbert/portal
Error: could not find config file /etc/supervisor/conf.d/supervisord.conf
For help, use /usr/bin/supervisord -h
Try the exec form of CMD with the flag and its value as separate array elements; in your version, "-c /etc/supervisor/conf.d/supervisord.conf" is passed to supervisord as one single argument, so it looks for a config file whose name literally begins with a space:
CMD ["/usr/bin/supervisord", "-c", "/etc/supervisor/conf.d/supervisord.conf"]
or with the shell form
CMD /usr/bin/supervisord -c /etc/supervisor/conf.d/supervisord.conf
Depending on the OS used by the base image, you might not even have to specify the supervisord.conf in the command line (see this example, or the official documentation)
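For instance, with the Debian/Ubuntu supervisor package used in the question, the main config /etc/supervisor/supervisord.conf already includes conf.d/*.conf, so a sketch relying on that package layout could be:

COPY supervisord.conf /etc/supervisor/conf.d/supervisord.conf
# -n keeps supervisord in the foreground as the container's main process
CMD ["/usr/bin/supervisord", "-n", "-c", "/etc/supervisor/supervisord.conf"]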
It happened to me on Alpine Linux 3.9, but it eventually ran successfully with
CMD ["supervisord", "-c", "<path_to_conf_file>"]
