Docker image runs on heroku server but not locally - docker

I have a Docker image that runs as expected on Heroku, but breaks when I try to run it locally. When the container is running locally, I am not able to stop it or open the Docker CLI. When I repeatedly hit stop, the container exits with code 137. I build with docker build -t generate_test_service . from the project directory and then simply press play in the Docker Desktop application.
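(For reference, exit code 137 is 128 + 9, i.e. the container was killed with SIGKILL after the stop request timed out.) The question never shows the local run command; a typical way to run this image locally, publishing the exposed port and setting PORT so the Flask app and the mapping agree, would be something like:
docker run --rm -p 4000:4000 -e PORT=4000 generate_test_service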
Dockerfile
# start from the Alpine base image
FROM alpine:latest
# copy the requirements file into the image
COPY ./requirements.txt /app/requirements.txt
# switch working directory
WORKDIR /app
# install the dependencies and packages in the requirements file
RUN apk update
RUN apk add py-pip
RUN apk add --no-cache python3-dev
ENV VIRTUAL_ENV=./venv
RUN python3 -m venv $VIRTUAL_ENV
ENV PATH="$VIRTUAL_ENV/bin:$PATH"
RUN pip install --upgrade pip
RUN apk add --update gcc libc-dev linux-headers && rm -rf /var/cache/apk/*
RUN pip install psutil
RUN pip --no-cache-dir install -r requirements.txt
# copy everything from the build context into the image
COPY . /app
# configure the container to run app.py with python3
ENTRYPOINT [ "python3" ]
CMD [ "app.py" ]
EXPOSE 4000
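With this exec-form ENTRYPOINT/CMD pair the container always starts as python3 app.py; to get a shell inside the image instead, the entrypoint has to be overridden, for example:
docker run --rm -it --entrypoint sh generate_test_service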
app.py
from flask import Flask
from flask import jsonify
import os
app = Flask(__name__)
if __name__ == '__main__':
    port = int(os.environ.get("PORT", 5000))
    app.run(debug=False, host='0.0.0.0', port=port)

Related

Docker RUN command near end of Dockerfile ... boots into container unless I give a CMD at the end but doesn't work either way. Any ideas?

I am working on a Dockerfile to be used with Google Cloud Run, and I can't get the command to run.
Here's the (slightly obfuscated) Dockerfile:
FROM gcr.io/google.com/cloudsdktool/google-cloud-cli:latest
RUN apt-get update
RUN pip install --upgrade pip
COPY requirements.txt /root/
RUN pip install -r /root/requirements.txt
RUN useradd -m ubuntu
ENV HOME=/home/ubuntu
USER ubuntu
COPY --chown=ubuntu:ubuntu . /home/ubuntu
WORKDIR /home/ubuntu
RUN gcloud config set project our-customer-tech-sem-prod
RUN gcloud auth activate-service-account --key-file=./service/our-customer-tech-sem-prod-a02b2c7f4536.json
RUN gcloud compute config-ssh
ENV GOOGLE_APPLICATION_CREDENTIALS=./service/our-customer-tech-sem-prod-a02b2c7f4536.json
CMD ["gcloud", "compute", "ssh", "--internal-ip", "our-persist-cluster-py3-prod", "--zone=us-central1-b", "--project", "our-customer-tech-sem-prod", "--", "'ps -ef'", "|", "./checker2.py"]
This tries to run the CMD at the end, but says it can't find the specified host. (The same command runs fine outside Docker.)
There were a couple of things wrong at the end: (1) a typo in the host name (fixed with the help of a colleague); (2) in an exec-form CMD the pipe is passed as a literal argument instead of being interpreted by a shell, so I had to move the command into a shell script to get the pipe to work correctly.
Here's the final (slightly obfuscated) Dockerfile:
FROM gcr.io/google.com/cloudsdktool/google-cloud-cli:latest
RUN apt-get update
RUN pip install --upgrade pip
COPY requirements.txt /root/
RUN pip install -r /root/requirements.txt
RUN useradd -m ubuntu
RUN mkdir /secrets
COPY secrets/* /secrets
ENV HOME=/home/ubuntu
USER ubuntu
COPY --chown=ubuntu:ubuntu . /home/ubuntu
WORKDIR /home/ubuntu
RUN gcloud config set project our-customer-tech-sem-prod
RUN gcloud auth activate-service-account --key-file=./service/our-customer-tech-sem-prod-a02b2c7f4536.json
RUN gcloud compute config-ssh
ENV GOOGLE_APPLICATION_CREDENTIALS=./service/our-customer-tech-sem-prod-a02b2c7f4536.json
CMD ["./rungcloud.sh"]

Docker multistage build Issues

My current Dockerfile looks like the one below. I'm trying to use h2o as the base for my ML model service. h2o requires a JRE, and I'm also forced to install the packages required by my Flask script. The image was as heavy as 1.8 GB, so I attempted a multi-stage build (script below).
#Original Docker File
FROM h2oai/h2o-open-source-k8s
MAINTAINER rajesh.r6r#gmail.com
USER root
WORKDIR /app
ADD . /app
RUN set -xe \
&& apt-get update -y \
&& apt-get install python-pip -y \
&& rm -rf /var/lib/apt/lists/* # remove the cached files
RUN pip install --upgrade pip
RUN pip install --trusted-host pypi.python.org -r requirements.txt
EXPOSE 5005
EXPOSE 54321
ENV NAME World
CMD ["python", "app.py"]
I attempted a multi-stage build as follows, but this only results in a Python image, skipping the h2o part. What am I missing?
#Multi-Stage Docker File
FROM h2oai/h2o-open-source-k8s AS baseimage
FROM python:3.7-slim
USER root
WORKDIR /app
ADD . /app
RUN pip install --trusted-host pypi.python.org -r requirements.txt
EXPOSE 5005
EXPOSE 54321
ENV NAME World
CMD ["python", "app.py"]

Monolith docker application with webpack

I am running my monolith application in a Docker container on k8s on GKE.
The application has Python and Node dependencies, plus webpack for the front-end bundle.
We have implemented CI/CD, which takes around 5-6 minutes to build and deploy a new version to the k8s cluster.
The main goal is to reduce the build time as much as possible. The Dockerfile is multi-stage, and webpack takes most of that time generating the bundle. To build the Docker image I already use a high-spec worker, and to reduce time I also tried the Kaniko builder.
Issue:
Docker's layer caching works perfectly for the Python code. But when a JS or CSS file changes we need a new bundle, and instead of generating one the build reuses the cached layer.
Is there any way to separate out the bundle build, or to force a new bundle by passing some value to the Dockerfile? (A sketch of one option follows the Dockerfile below.)
Here is my Dockerfile:
FROM python:3.5 AS python-build
WORKDIR /app
COPY requirements.txt ./
RUN pip install -r requirements.txt &&\
pip3 install Flask-JWT-Extended==3.20.0
ADD . /app
FROM node:10-alpine AS node-build
WORKDIR /app
COPY --from=python-build ./app/app/static/package.json app/static/
COPY --from=python-build ./app ./
WORKDIR /app/app/static
RUN npm cache verify && npm install && npm install -g --unsafe-perm node-sass && npm run sass && npm run build
FROM python:3.5-slim
COPY --from=python-build /root/.cache /root/.cache
WORKDIR /app
COPY --from=node-build ./app ./
RUN apt-get update -yq \
&& apt-get install curl -yq \
&& pip install -r requirements.txt
EXPOSE 9595
CMD python3 run.py
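One common way to get what the question asks for is a cache-busting build argument placed just before the bundling step, so only the node stage is invalidated; a sketch (CACHE_BUST is a made-up name):
# in the node-build stage, just before the bundle is generated
ARG CACHE_BUST=default
RUN echo "cache bust: $CACHE_BUST" && npm run sass && npm run build
Then pass a changing value whenever the front end has changed:
docker build --build-arg CACHE_BUST=$(git rev-parse HEAD) -t app:latest .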
I would suggest creating separate build pipelines for your Docker images, since your npm and pip requirements don't change that often.
This will dramatically improve the speed by cutting the time spent on the npm and pip registries.
Use a private Docker registry (the official one, or something like VMware Harbor or Sonatype Nexus OSS).
Store those builder images on your registry and reuse them whenever something in the project changes.
Something like this:
First Docker Builder // python-builder:YOUR_TAG [gitrev, date, etc.]
docker build --no-cache -t python-builder:YOUR_TAG -f Dockerfile.python.build .
FROM python:3.5
WORKDIR /app
COPY requirements.txt ./
RUN pip install -r requirements.txt &&\
pip3 install Flask-JWT-Extended==3.20.0
Second Docker Builder // js-builder:YOUR_TAG [gitrev, date, etc.]
docker build --no-cache -t js-builder:YOUR_TAG -f Dockerfile.js.build .
FROM node:10-alpine
WORKDIR /app
COPY app/static/package.json /app/app/static
WORKDIR /app/app/static
RUN npm cache verify && npm install && npm install -g --unsafe-perm node-sass
Your Application Multi-stage build:
docker build --no-cache -t app_delivery:YOUR_TAG -f Dockerfile.app .
FROM python-builder:YOUR_TAG as python-build
# Nothing to do, already stored in another build process
FROM js-builder:YOUR_TAG AS node-build
ADD ##### YOUR JS/CSS files only here, required from npm! ###
RUN npm run sass && npm run build
FROM python:3.5-slim
COPY . /app # your original clean app
COPY --from=python-build #### only the files installed with the pip command
WORKDIR /app
COPY --from=node-build ##### Only the generated files from npm here! ###
RUN apt-get update -yq \
&& apt-get install curl -yq \
&& pip install -r requirements.txt
EXPOSE 9595
CMD python3 run.py
A question: why do you install curl and run pip install -r requirements.txt again in the final Docker image?
Running apt-get update and install every time without cleaning the apt cache (/var/cache/apt) produces a bigger image.
As a suggestion, use the docker build command with the --no-cache option when you need to avoid stale cached results:
docker build --no-cache -t your_image:your_tag -f your_dockerfile .
Remarks:
You'll have 3 separate Dockerfiles, as listed above.
Build Docker images 1 and 2 only when your python-pip and node-npm requirements change; otherwise keep them fixed for your project.
If any dependency requirement changes, update the builder image involved, then point the multi-stage Dockerfile at the latest built image.
Day to day you then only build your project's own source code (CSS, JS, Python), which also gives you reproducible builds.
To optimize your environment and the copying of files across the multi-stage builders, try using virtualenv for the Python build (see the sketch below).
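As a sketch of that last remark: a virtualenv keeps everything pip installs under one directory, which makes the copy between stages a single line (paths are illustrative):
FROM python:3.5 AS python-build
RUN python -m venv /opt/venv
ENV PATH="/opt/venv/bin:$PATH"
COPY requirements.txt ./
RUN pip install -r requirements.txt
FROM python:3.5-slim
COPY --from=python-build /opt/venv /opt/venv
ENV PATH="/opt/venv/bin:$PATH"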

How to copy a folder from docker to host while configuring custom image

I'm trying to create my own Docker image. After it runs, an archive folder is created in the container, and I need that archive folder to be copied to my host automatically.
Importantly, I have to configure this process in my Dockerfile, before the image is built.
Below you can see what is already in my Dockerfile:
FROM python:3.6.5-slim
RUN apt-get update && apt-get install -y gcc && apt-get autoclean -y
WORKDIR /my-tests
COPY jobfile.py testbed.yaml requirements.txt rabbit.py ./
RUN pip install --upgrade pip wheel setuptools && \
pip install --no-cache-dir -r requirements.txt && \
rm -v requirements.txt
VOLUME /my-tests/archive
ENV TINI_VERSION v0.18.0
ADD https://github.com/krallin/tini/releases/download/${TINI_VERSION}/tini /tini
RUN chmod +x /tini
ENTRYPOINT ["/tini","--"]
CMD ["easypy", "jobfile.py","-testbed_file","testbed.yaml"]
While running the container, map a folder on the host to the archive folder of the container, using -v /usr/host_folder:/my-tests/archive. Anything created inside the container at /my-tests/archive will then be available at /usr/host_folder on the host.
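For example, assuming the image was built with the tag my-tests:
docker run -v /usr/host_folder:/my-tests/archive my-tests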
Or copy the files out using scp: create a script which first runs the container and then runs this docker exec command.
docker exec -it <container-name> scp /my-tests/archive <host-ip>:/host_path -r
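Alternatively, docker cp copies a folder out of a (running or stopped) container to the host without needing SSH at all:
docker cp <container-name>:/my-tests/archive /usr/host_folder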

Docker: Permissions error on OpenShift Origin 3.6

I'm trying to run a pretty simple Flask API in OpenShift Origin 3.6. I can build this container just fine locally and remotely, but when I go to deploy on OpenShift, I get permission errors on the RUN chmod -R 777 ... lines. Has anyone found a way around this? I wonder if my corporate environment doesn't allow this type of operation, but it's all within a container...
Edit: providing a completely minimal example
Directory structure:
project
├── Dockerfile
└── app
└── api.py
Dockerfile to build base image:
FROM docker.io/ubuntu:16.04
RUN apt-get update && apt-get install -y --no-install-recommends \
cmake curl git make gunicorn nginx python3 python3-pip python3-setuptools build-essential \
&& apt-get clean && rm -rf /var/lib/apt/lists/*
RUN pip3 install --upgrade pip
RUN pip3 install pandas numpy scikit-learn scipy xgboost flask-restful nltk gunicorn
RUN mkdir -p /home/app
WORKDIR /
RUN python3 -c 'import nltk; nltk.download("punkt")'
RUN mv /root/nltk_data /home/app
I then run docker build . -t project:latest --no-cache. Next is the Dockerfile that uses the base image from above to deploy the actual application (I basically just comment out the "base image" lines above and uncomment the ones below, in the same Dockerfile):
FROM project:latest
COPY app /home/app
RUN chmod -R 777 /home/app
WORKDIR /home/app
EXPOSE 5000
CMD ["python3", "api.py"]
I build the container to be deployed using docker build . -t project:app --no-cache.
api.py:
import time
if __name__ == '__main__':
    while True:
        print('This app is running')
        time.sleep(30)
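For context: OpenShift by default runs containers under an arbitrary, non-root UID, so steps and files that assume a specific user often fail there even though they work in plain Docker. The commonly documented adjustment is to make the app directory owned and writable by the root group instead of chmod 777; a sketch against the second Dockerfile above:
RUN chgrp -R 0 /home/app && chmod -R g+rwX /home/app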
