Run cron in a Docker container alongside another service, using a proxy - docker

I have a Django application in a Docker container. I built the image using:
docker build --build-arg http_proxy=$http_proxy \
    --build-arg https_proxy=$https_proxy \
    --build-arg no_proxy=$no_proxy \
    -t <tag> .
I have my proxy variables set in my current terminal session using:
export http_proxy=http://user:pass@proxy.company.com:8099/
export https_proxy=http://user:pass@proxy.company.com:8099/
export no_proxy=*.local,localhost,169.254.169.254,*.abc.company.com,*.cloud.company.com
Below is the Dockerfile:
FROM artifactory.cloud.company.com/amazonlinux:2.0.20181010
ENV PIP_INDEX_URL https://artifactory.cloud.company.com/artifactory/api/pypi/pypi-internalfacing/simple/
RUN yum install -y python3 python3-devel python3-setuptools python3-pip git gcc
RUN pip3 install --upgrade --trusted-host artifactory.cloud.company.com pip setuptools
RUN amazon-linux-extras install nginx1.12
RUN yum install -y python2-pip
RUN pip2 install supervisor -i https://artifactory.cloud.company.com/artifactory/api/pypi/pypi-internalfacing/simple/ --trusted-host artifactory.cloud.company.com
RUN pip3 install uwsgi
RUN pip3 install django requests python-decouple
RUN mkdir -p /ASVDASHBOARD
# Application folder on the server with absolute path.
ADD ./ASVDASHBOARD /ASVDASHBOARD
WORKDIR /ASVDASHBOARD
RUN mkdir -p /etc/supervisor/
RUN cp ASVDASHBOARD_nginx.conf /etc/nginx/conf.d/default.conf
RUN cp ASVDASHBOARD_supervisor.conf /etc/supervisor/supervisord.conf
#RUN cp ASVDASHBOARD_supervisor.conf /etc/supervisord.d/
RUN chown -R nginx:nginx /ASVDASHBOARD && \
mkdir -p /ASVDASHBOARD/logs/ && \
touch /ASVDASHBOARD/logs/dashboard.log
RUN python3 manage.py makemigrations && \
python3 manage.py migrate --run-syncdb && \
python3 manage.py migrate
RUN python3 manage.py update_data
RUN mkdir /run/uwsgi && chmod -R 777 /run/uwsgi
EXPOSE 8080
CMD ["supervisord", "-c", "/etc/supervisor/supervisord.conf"]
I also run the container using
docker run -it -d --name final_dashboardd -e http_proxy -e https_proxy -e no_proxy -p 8080:8080 py37:v3
Now I want a cron job in the container that runs python3 /pathtofile/manage.py update_data. To run it manually, I attach a shell to the container with docker exec, set the proxy variables, and run the command; that works fine.
How do I set/pass the proxy so this cron job can use it?
I tried
*/1 * * * * python3 /pathtofile/manage.py update_data
which didn't work. The command works when I am attached to a terminal with my proxy variables set, but how do I set up the proxy for cron?
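For what it's worth, cron jobs do not inherit the environment of your shell or of docker run, so the proxy variables have to be declared where cron can see them. A minimal sketch of a crontab (placeholder credentials; assumes a cron implementation such as vixie-cron that accepts VAR=value assignments at the top of the file):

```
http_proxy=http://user:pass@proxy.company.com:8099/
https_proxy=http://user:pass@proxy.company.com:8099/
no_proxy=*.local,localhost,169.254.169.254,*.abc.company.com,*.cloud.company.com
*/1 * * * * python3 /pathtofile/manage.py update_data >> /var/log/update_data.log 2>&1
```

Alternatively, the job line itself can source a file that exports the variables before invoking python3.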

Related

conda: not found - Docker container does not run export PATH=~/miniconda3/bin:$PATH

I have the following Dockerfile:
FROM --platform=linux/x86_64 nvidia/cuda:11.7.0-devel-ubuntu20.04
COPY app ./app
ARG DEBIAN_FRONTEND=noninteractive
RUN apt-get update && apt-get -y upgrade && apt-get install -y apt-utils
RUN apt-get install -y \
net-tools iputils-ping \
build-essential cmake git \
curl wget vim \
zip p7zip-full p7zip-rar \
imagemagick ffmpeg \
libomp5
# RUN wget https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh
COPY Miniconda3-latest-Linux-x86_64.sh .
RUN chmod guo+x Miniconda3-latest-Linux-x86_64.sh
RUN bash Miniconda3-latest-Linux-x86_64.sh -b -p ~/miniconda3
RUN export PATH=~/miniconda3/bin:$PATH
RUN conda --version
RUN conda update -n base conda
RUN conda create -y --name servier python=3.6
RUN conda activate servier
RUN conda install -c conda-forge rdkit
CMD ["bash"]
When I run docker image build -t image_test_cuda2 ., it breaks at RUN conda --version.
The error is /bin/sh: 1: conda: not found. The problem is that RUN export PATH=~/miniconda3/bin:$PATH has no effect: it does not put conda on the PATH for the later steps.
If I build the image up to RUN bash Miniconda3-latest-Linux-x86_64.sh -b -p ~/miniconda3, get a shell in the container with docker exec -it <id> /bin/bash, and manually run export PATH=~/miniconda3/bin:$PATH, it works fine. Manually running the next command, conda update -n base conda, inside the container also works.
The conclusion is that RUN export PATH=~/miniconda3/bin:$PATH does not work in a Dockerfile during docker image build. How do I solve this issue?
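For context, each RUN line executes in a fresh shell, so an export vanishes when that step finishes. The usual fix is ENV, which persists for every later step and for the final container; a minimal sketch (assuming root's home directory, /root, since the Dockerfile sets no USER, and noting that ~ may not expand inside ENV):

```
# Persist the PATH for all subsequent RUN steps (ENV survives across layers; export does not)
ENV PATH=/root/miniconda3/bin:$PATH
RUN conda --version
```

Note that RUN conda activate servier fails for the same reason; something like conda install -n servier -c conda-forge rdkit or conda run -n servier ... avoids needing activation at build time.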

docker returns err_empty_response when backend script takes too long

I am running a FastAPI application in docker. The backend is composed of multiple .py scripts, which train several machine learning models. The FastAPI returns the results. I have docker running and everything is just fine. However, when the modeling takes longer (by using several hyperparameter search loops), I receive an err_empty_response from my dockerized App. Without docker everything is fine. I suppose, it is some timeout issue.
I have added "shutdown-timeout": 600 in the config.v2.json file in var/lib/docker/containers (I am on ubuntu 18.04), but this did not help.
This is my dockerfile:
FROM ubuntu:18.04
ARG DEBIAN_FRONTEND=noninteractive
ENV TZ=Europe/Moscow
RUN apt-get update && apt-get install -y curl wget gcc build-essential
#install python 3.9
RUN apt update
RUN apt install software-properties-common -y
RUN add-apt-repository ppa:deadsnakes/ppa
RUN apt install python3.9 -y
RUN ln -s /usr/bin/pip3 /usr/bin/pip
RUN ln -s /usr/bin/python3.9 /usr/bin/python
# install conda
RUN wget --quiet https://repo.anaconda.com/miniconda/Miniconda3-4.5.12-Linux-x86_64.sh -O ~/miniconda.sh && \
/bin/bash ~/miniconda.sh -b -p /opt/conda
# create env with python 3.9
RUN /opt/conda/bin/conda create -y -n myenv python=3.9
RUN apt install -y -q build-essential python3-pip python3-dev
RUN pip3 install -U pip setuptools wheel
#install python environment/libraries
COPY requirements.txt /app/requirements.txt
ENV PATH=/opt/conda/envs/myenv/bin:$PATH
RUN pip3 install gunicorn uvloop httptools
RUN pip3 install -r /app/requirements.txt
RUN pip3 install -U kaleido
COPY projectfolder/ /projectfolder/
RUN ls -la /projectfolder/*
WORKDIR /projectfolder
EXPOSE 80
ENTRYPOINT /opt/conda/envs/myenv/bin/gunicorn \
-b 0.0.0.0:80 \
-w 4 \
-k uvicorn.workers.UvicornWorker main:app \
--chdir /projectfolder
This is a sample FastAPI app, just for demo. The sleep mimics the long-running work:
from fastapi import FastAPI
import time
import uvicorn

app = FastAPI()

@app.get("/")
async def root():
    time.sleep(150)
    return {"message": "Hello World from docker"}

if __name__ == "__main__":
    uvicorn.run(app)
I launch docker with
sudo docker build -t myproject .
sudo docker run -it --rm --name my-running-app -p 80:80 myproject
and open localhost in Chrome.
So the question is: how can I extend the timeout, if this is the issue (most likely)?
OK, after experimenting a bit, I found out that the issue was not a Docker timeout but the gunicorn worker timeout. To change it, I modified the ENTRYPOINT definition in the Dockerfile and set the timeout option:
ENTRYPOINT /opt/conda/envs/myenv/bin/gunicorn \
-b 0.0.0.0:80 \
-w 4 \
--timeout 600 \
-k uvicorn.workers.UvicornWorker main:app
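As a side note, time.sleep inside an async def route blocks the event loop (and keeps the worker busy for the whole duration, which is what trips the timeout). A minimal asyncio sketch, independent of FastAPI, of offloading a blocking call to a thread pool so the loop stays responsive:

```python
import asyncio
import time

async def handler():
    loop = asyncio.get_running_loop()
    start = time.monotonic()
    # run_in_executor moves the blocking sleep onto a worker thread,
    # so other coroutines can keep running in the meantime
    await loop.run_in_executor(None, time.sleep, 0.2)
    return time.monotonic() - start

elapsed = asyncio.run(handler())
print(f"handler took {elapsed:.2f}s without blocking the loop")
```

In a FastAPI route the same pattern applies: await the blocking work via an executor (or use a plain def route, which FastAPI runs in a thread pool).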

Run Python scripts from the command line in Docker images

I built a Docker image using a Dockerfile, with Python and some libraries inside (my project code is not inside). In my local working dir there are some scripts to be run in the container. So here is what I did:
$ cd /path/to/my_workdir
$ docker run -it --name test -v `pwd`:`pwd` -w `pwd` my/code:test python src/main.py --config=test --results-dir=/home/me/Results
The command python src/main.py --config=test --results-dir=/home/me/Results is what I want to run inside the Docker container.
However, it returns:
/home/docker/miniconda3/bin/python: /home/docker/miniconda3/bin/python: cannot execute binary file
How can I fix it and run my code?
Here is my Dockerfile
FROM nvidia/cuda:10.1-cudnn7-runtime-ubuntu18.04
MAINTAINER Me <me@me.com>
RUN apt update -yq && \
apt install -yq curl wget unzip git vim cmake sudo
RUN adduser --disabled-password --gecos '' docker && \
adduser docker sudo && \
echo '%sudo ALL=(ALL) NOPASSWD:ALL' >> /etc/sudoers
USER docker
WORKDIR /home/docker/
RUN chmod a+rwx /home/docker/ && \
wget https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh && \
bash Miniconda3-latest-Linux-x86_64.sh -b && rm Miniconda3-latest-Linux-x86_64.sh
ENV PATH /home/docker/miniconda3/bin:$PATH
RUN pip install absl-py==0.5.0 atomicwrites==1.2.1 attrs==18.2.0 certifi==2018.8.24 chardet==3.0.4 cycler==0.10.0 docopt==0.6.2 enum34==1.1.6 future==0.16.0 idna==2.7 imageio==2.4.1 jsonpickle==1.2 kiwisolver==1.0.1 matplotlib==3.0.0 mock==2.0.0 more-itertools==4.3.0 mpyq==0.2.5 munch==2.3.2 numpy==1.15.2 pathlib2==2.3.2 pbr==4.3.0 Pillow==5.3.0 pluggy==0.7.1 portpicker==1.2.0 probscale==0.2.3 protobuf==3.6.1 py==1.6.0 pygame==1.9.4 pyparsing==2.2.2 pysc2==3.0.0 pytest==3.8.2 python-dateutil==2.7.3 PyYAML==3.13 requests==2.19.1 s2clientprotocol==4.10.1.75800.0 sacred==0.8.1 scipy==1.1.0 six==1.11.0 sk-video==1.1.10 snakeviz==1.0.0 tensorboard-logger==0.1.0 torch==0.4.1 torchvision==0.2.1 tornado==5.1.1 urllib3==1.23
USER docker
ENTRYPOINT ["/bin/bash"]
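One likely cause worth checking (an assumption, not confirmed in the question): with ENTRYPOINT ["/bin/bash"], the run command expands to /bin/bash python src/main.py ..., so bash tries to interpret the python binary itself as a shell script, which produces exactly this error. The symptom is easy to reproduce locally, and overriding the entrypoint avoids it:

```shell
# Reproduce the symptom: ask bash to interpret a compiled binary as a script
msg=$(bash /bin/ls 2>&1 || true)
echo "$msg"   # the message contains "cannot execute binary file"

# Hypothetical fix: bypass the bash entrypoint so python is executed directly
# docker run -it -v `pwd`:`pwd` -w `pwd` --entrypoint python my/code:test \
#     src/main.py --config=test --results-dir=/home/me/Results
```

Removing the ENTRYPOINT ["/bin/bash"] line from the Dockerfile (or changing it to the python interpreter) would have the same effect.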
Try making the file executable before running it, as John mentioned, in the Dockerfile:
FROM python:latest
COPY src/main.py /usr/local/share/
RUN chmod +x /usr/local/share/src/main.py  # <-- just add this line
# I have some doubts about the pathing
CMD ["/usr/local/share/src/main.py", "--config=test --results-dir=/home/me/Results"]
You can run a python script in docker by adding this to your docker file:
FROM python:latest
COPY src/main.py /usr/local/share/
CMD ["src/main.py", "--config=test --results-dir=/home/me/Results"]
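A caveat about both snippets above: COPY src/main.py /usr/local/share/ places the file at /usr/local/share/main.py, not /usr/local/share/src/main.py, and a script launched via exec-form CMD without an interpreter needs a shebang line. Also, each element of the CMD array is one argument, so "--config=test --results-dir=..." would arrive as a single string. A corrected sketch under those assumptions:

```
FROM python:latest
COPY src/main.py /usr/local/share/main.py
CMD ["python", "/usr/local/share/main.py", "--config=test", "--results-dir=/home/me/Results"]
```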

Permissions in Docker volume

I am struggling with permissions on a Docker volume; I get permission denied when writing.
This is a small part of my Dockerfile:
FROM ubuntu:18.04
RUN apt-get update && \
apt-get install -y \
apt-transport-https \
build-essential \
ca-certificates \
curl \
vim && \............
RUN curl -sL https://deb.nodesource.com/setup_8.x | bash - && apt-get install -y nodejs
# Add non-root user
ARG USER=user01
RUN useradd -Um -d /home/$USER -s /bin/bash $USER && \
apt install -y python3-pip && \
pip3 install qrcode[pil]
#Copy that startup.sh into the scripts folder
COPY /scripts/startup.sh /scripts/startup.sh
#Making the startup.sh executable
RUN chmod -v +x /scripts/startup.sh
#Copy node API files
COPY --chown=user1 /node_api/* /home/user1/
USER $USER
WORKDIR /home/$USER
# Expose needed ports
EXPOSE 3000
VOLUME /data_storage
ENTRYPOINT [ "/scripts/startup.sh" ]
Also a small part of my startup.sh
#!/bin/bash
/usr/share/lib/provision.py --enterprise-seed $ENTERPRISE_SEED > config.json
Then my docker builds command:
sudo docker build -t mycontainer .
And the docker run command:
sudo docker run -v data_storage:/home/user01/.client -p 3008:3000 -itd mycontainer
The problem: the Python script creates the folder /home/user01/.client and copies some files into it. That always worked fine. But now I want those files, which are data files, in a volume for backup purposes, and with the volume mapped I get permission denied, so the Python script is no longer able to write.
So at the end of my Dockerfile, this instruction, combined with the mapping in the docker run command, gives me the permission denied:
VOLUME /data_storage
Any suggestions on how to resolve this? Are more permissions needed for "user01"?
Thanks
I was able to resolve the issue by removing the VOLUME instruction from the Dockerfile and doing the mapping only at docker run time:
sudo docker run -v data_storage:/home/user01/.client -p 3008:3000 -itd mycontainer
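For reference, a variant that keeps a VOLUME declaration (a sketch, not tested against this exact image): a named volume copies ownership from the image's directory on first use, so creating the mount point owned by the app user before declaring it can avoid the permission error:

```
# Create the mount point with the right owner before declaring it a volume
RUN mkdir -p /home/$USER/.client && chown -R $USER:$USER /home/$USER/.client
VOLUME /home/$USER/.client
```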

Docker container port issue: not able to access Tomcat URL using host IP

I am new to Docker. I have set up Docker on an Amazon Linux box.
I have a Dockerfile which installs Tomcat, Java, and a WAR.
I can see all the installations present in the container when I navigate through it, in the exact folders I mentioned in the Dockerfile.
When I run the container, it says the Tomcat server has started, and I have also tailed the logs, so I can see the service is running.
But when I open the host IP and port 8080 in the browser, it says the URL can't be reached.
These are the commands to build and run the file which works fine and I can see the status as running.
docker build -t friendly1 .
docker run -p 8080:8080 friendly1
What am I missing here? I'd appreciate some help with this.
FROM centos:latest
RUN yum -y update && \
yum -y install wget && \
yum -y install tar && \
yum -y install zip unzip
ENV JAVA_HOME /opt/java/jdk1.7.0_67/
ENV CATALINA_HOME /opt/tomcat/apache-tomcat-7.0.70
ENV SAVIYNT_HOME /opt/tomcat/apache-tomcat-7.0.70/webapps
ENV PATH $PATH:$JAVA_HOME/jre/jdk1.7.0_67/bin:$CATALINA_HOME/bin:$CATALINA_HOME/scripts:$CATALINA_HOME/apache-tomcat-7.0.70/bin
ENV JAVA_VERSION 7u67
ENV JAVA_BUILD 7u67
RUN mkdir /opt/java/
RUN wget https://<S3location>/jdk-7u67-linux-x64.gz && \
tar -xvf jdk-7u67-linux-x64.gz && \
#rm jdk*.gz && \
mv jdk* /opt/java/
# Install Tomcat
ENV TOMCAT_MAJOR 7
ENV TOMCAT_VERSION 7.0.70
RUN mkdir /opt/tomcat/
RUN wget https://<s3location>/apache-tomcat-7.0.70.tar.gz && \
tar -xvf apache-tomcat-${TOMCAT_VERSION}.tar.gz && \
#rm apache-tomcat*.tar.gz && \
mv apache-tomcat* /opt/tomcat/
RUN chmod +x ${CATALINA_HOME}/bin/*sh
WORKDIR /opt/tomcat/apache-tomcat-7.0.70/
CMD "startup.sh" && tail -f /opt/tomcat/apache-tomcat-7.0.70/logs/*
EXPOSE 8080
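Some quick checks that usually narrow this down (the container name is a placeholder; on AWS the instance's security group must also allow inbound traffic on port 8080):

```
docker ps --format '{{.Names}} {{.Ports}}'   # expect 0.0.0.0:8080->8080/tcp
curl -I http://localhost:8080/               # works from the host? then the problem is outside Docker
docker logs <container-name>                 # confirm Tomcat finished deploying the WAR
```

If curl succeeds on the host but the URL fails externally, the firewall/security group is the likely culprit rather than the container.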
