App Engine Flexible Environment - Dockerfile installing outdated version of GDAL - docker

I am trying to use a Docker image on Google App Engine Flexible Environment.
FROM ubuntu:bionic
MAINTAINER Makina Corpus "contact@makina-corpus.com"
ENV PYTHONUNBUFFERED 1
ENV DEBIAN_FRONTEND noninteractive
ENV LANG C.UTF-8
RUN apt-get update -qq && apt-get install -y -qq \
    # std libs
    git less nano curl \
    ca-certificates \
    wget build-essential \
    # python basic libs
    python3.8 python3.8-dev python3.8-venv gettext \
    # geodjango
    gdal-bin binutils libproj-dev libgdal-dev \
    # postgresql
    libpq-dev postgresql-client && \
    apt-get clean all && rm -rf /var/lib/apt/lists/* && rm -rf /var/cache/apt/*
# install pip
RUN wget https://bootstrap.pypa.io/get-pip.py && python3.8 get-pip.py && rm get-pip.py
RUN pip3 install --no-cache-dir setuptools wheel -U
CMD ["/bin/bash"]
The Docker image appears to build correctly, but when the service deploys, the application crashes and I get this error message:
File "/Users/NAME/Documents/gcp/google-cloud-sdk/lib/googlecloudsdk/api_lib/app/operations_util.py", line 183, in IsDone
encoding.MessageToPyValue(operation.error)))
OperationError: Error Response: [9]
Application startup error! Code: APP_CONTAINER_CRASHED
ERROR: (gcloud.app.deploy) Error Response: [9]
Application startup error! Code: APP_CONTAINER_CRASHED
This is failing because the Dockerfile installs a significantly outdated version of the GDAL package, which conflicts with the more recent Python installation.
How do I ensure that the Dockerfile points at the correct package repository and installs the right, up-to-date versions? Is there some line I can insert to update the repository, or at least print which repository is in use, before it starts installing?
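(As an aside, the Dockerfile itself can print this: a line like the following, inserted before the install step, shows which repositories and candidate versions apt would use. This is an illustrative debugging aid, not part of the original file.)
# Print the repositories and candidate versions apt would pick for GDAL
RUN apt-get update -qq && apt-cache policy gdal-bin libgdal-dev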
EDIT:
My app.yaml:
# [START django_app]
runtime: custom
env: flex
entrypoint: gunicorn -b :$PORT MyApplication.wsgi
runtime_config:
  python_version: 3
# [END runtime]
handlers:
# This configures Google App Engine to serve the files in the app's static
# directory.
#- url: /static
#  static_dir: static/
#- url: /MyApplication/static
#  static_dir: MyApplication/static/
# This handler routes all requests not caught above to your main app. It is
# required when static routes are defined, but can be omitted (along with
# the entire handlers section) when there are no static files defined.
- url: /.*
  script: auto
# [END django_app]
resources:
  cpu: 1
  memory_gb: 2
  disk_size_gb: 10

Your App Engine deployment is failing because it needs a service listening on port 8080, and it cannot just run bash in the cloud. If you need to debug your App Engine Flex instance, you first need to get a service listening on port 8080 and then enable SSH.

Similar issues are being tackled here and here
Your Dockerfile should run a command that spins up your application listening on port 8080:
CMD gunicorn -b :$PORT MyApplication.wsgi
GAE actually spins up containers with docker run, and I am not sure why they would also have the entrypoint specified in the app.yaml file. Better not to ask too many questions with GAE.
Other issues to think about, as mentioned in some of the comments above:
Wouldn't it be better to use Google's GAE base image -> FROM gcr.io/google-appengine/python?
If so, you need to consider that it is based on Ubuntu 16.04, so you need to pull newer GDAL dependencies from the UbuntuGIS PPA (add-apt-repository -y ppa:ubuntugis/ppa); see the sketch after this list.
How do you install your other dependencies? Running pip against a requirements file?
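A minimal sketch of that combination (assuming the GAE base image suits your app; the pip/gunicorn lines mirror the question's setup and are illustrative, not verified):
FROM gcr.io/google-appengine/python
# Add the UbuntuGIS PPA so apt resolves a newer GDAL than stock Ubuntu 16.04 provides
RUN apt-get update -qq && apt-get install -y -qq software-properties-common && \
    add-apt-repository -y ppa:ubuntugis/ppa && \
    apt-get update -qq && \
    apt-get install -y -qq gdal-bin libgdal-dev && \
    rm -rf /var/lib/apt/lists/*
COPY requirements.txt /app/requirements.txt
RUN pip install --no-cache-dir -r /app/requirements.txt
COPY . /app
# Listen on the port App Engine injects, as discussed above
CMD gunicorn -b :$PORT MyApplication.wsgi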

Related

Module /opt/rejson.so failed to load

I've been creating new instances for Redis and clustering them. I would like my instances to use ReJSON, so I'm using redislabs/rejson. The only problem is that when composing up, I get this issue for all Redis nodes.
Here is the Dockerfile for all Redis nodes:
FROM redislabs/rejson:latest AS redis
ARG REDIS_PORT
WORKDIR /redis-workdir
# Installing OS level dependencies
RUN apt-get update
RUN apt-get install -y wget
RUN apt-get install -y gettext-base
RUN mkdir -p "/opt"
# Downloading redis default config
RUN wget https://raw.githubusercontent.com/wayofthepie/docker-rejson/master/redis.conf
RUN mv redis.conf redis.default.conf
COPY . .
ENV REDIS_PORT $REDIS_PORT
RUN envsubst < redis.conf > updated_redis.conf
RUN mv updated_redis.conf redis.conf
CMD redis-server ./redis.conf
It's the last line of this config that fails: https://raw.githubusercontent.com/wayofthepie/docker-rejson/master/redis.conf (the loadmodule directive the error message refers to).
Thanks for helping out.
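(No answer was recorded here. One way to debug this — my own suggestion, not from the thread — is to check where the base image actually keeps the module before pointing redis.conf at /opt/rejson.so:)
# Locate the real path of the ReJSON module inside the base image
docker run --rm --entrypoint sh redislabs/rejson:latest -c 'find / -name rejson.so 2>/dev/null'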

Certificate / SSL connection problem in docker build on macOS

I am building a docker image on macOS. This is the part of my Dockerfile:
FROM ubuntu:18.04
# Dir you need to override to keep data on reboot/new container:
VOLUME /var/lib/mysql
#VOLUME /var/www/MISP/Config
# Dir you might want to override in order to have custom ssl certs
# Need: "misp.key" and "misp.crt"
#VOLUME /etc/ssl/private
# 80/443 - MISP web server, 3306 - mysql, 6379 - redis, 50000 - MISP ZeroMQ
EXPOSE 80 443 3306 6379 50000
ENV DEBIAN_FRONTEND noninteractive
ENV DEBIAN_PRIORITY critical
RUN apt-get update && apt-get install -y \
    supervisor cron logrotate syslog-ng-core postfix curl gcc git gnupg-agent make \
    python3 openssl redis-server sudo vim zip wget mariadb-client mariadb-server \
    sqlite3 moreutils apache2 apache2-doc apache2-utils libapache2-mod-php \
    php php-cli php-gnupg php-dev php-json php-mysql php7.2-opcache php-readline \
    php-redis php-xml php-mbstring rng-tools python3-dev python3-pip python3-yara \
    python3-redis python3-zmq libxml2-dev libxslt1-dev zlib1g-dev python3-setuptools \
    libpq5 libjpeg-dev libfuzzy-dev ruby asciidoctor tesseract-ocr imagemagick \
    libpoppler-cpp-dev
RUN mkdir -p /var/www/MISP /root/.config /root/.git
WORKDIR /var/www/MISP
RUN chown -R www-data:www-data /var/www/MISP /root/.config /root/.git
RUN sudo -u www-data -H git clone https://github.com/MISP/MISP.git /var/www/MISP
When I build the image, it returns this error:
Step 16/16 : RUN sudo -u www-data -H git clone https://github.com/MISP/MISP.git /var/www/MISP
---> Running in f0f0b76b353c
Cloning into '/var/www/MISP'...
fatal: unable to access 'https://github.com/MISP/MISP.git/': server certificate verification failed. CAfile: /etc/ssl/certs/ca-certificates.crt CRLfile: none
It can't find the certificate. If I add RUN git config --global http.sslverify "false" it gets past this error, but then the rest of the Dockerfile fails with similar errors.
It seems that my Docker on macOS has a problem with SSL connections. Is there any configuration for SSL certificates, or some other SSL setting for Docker on Mac, that I missed?
I have run into the same issue. Exactly the same error message.
What you have to realize here is that the git command you are running is inside the Docker environment, and has nothing to do with the fact that you are on macOS or with the git version installed in your host operating system.
What is happening is that the plain ubuntu:18.04 image is outdated and doesn't have up-to-date certificates.
In my case I fixed this by adding the following right after your FROM line:
FROM ubuntu:18.04
# Update the OS
RUN apt-get update && apt-get upgrade -y
In my case it was another base image but the behavior/solution was the same.
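If a full upgrade feels heavy-handed, a narrower variant (my assumption, not from the original answer) is to refresh only the CA bundle:
FROM ubuntu:18.04
# Reinstall and regenerate just the CA certificate bundle
RUN apt-get update && apt-get install -y --reinstall ca-certificates && update-ca-certificates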

How to enable systemd on Dockerfile with Ubuntu18.04

I know systemd is not recommended in Docker containers, but is it possible?
I have staging/prod environments on Ubuntu 18.04 cloud VMs deployed with Ansible;
My current dev environment is a Ubuntu 18.04 Vagrantfile that uses the same Ansible playbook.yml of staging/prod
Now I'm trying to replace the Vagrantfile with a Dockerfile for development, but the Ansible playbook.yml fails when applying systemd modules. I would like to have systemd in my dev environment as well, so that I can test changes to my playbook.yml locally. Any idea how I can do it?
If I try to build with the Dockerfile and playbook.yml below, I get the error Failed to find required executable systemctl in paths.
If I add RUN apt-get install systemd to the Dockerfile and try to build, I get the error System has not been booted with systemd as init system.
Sample Dockerfile:
FROM ubuntu:18.04
ADD . /app
WORKDIR /app
# Install Python3 pip used to install Ansible
RUN apt-get update && apt-get install -y \
    python3-pip
# Install Ansible
RUN pip3 install --trusted-host pypi.python.org ansible
RUN ansible-playbook playbook.yml -i inventory
EXPOSE 80
Sample playbook.yml:
---
- name: Ansible playbook to setup dev environment
  hosts: all
  vars:
    ansible_python_interpreter: "/usr/bin/python3"
    debug: True
  become: yes
  become_method: sudo
  tasks:
    - name: Copy App Gunicorn systemd config
      template:
        src: app_gunicorn.service
        dest: /etc/systemd/system/
    - name: Enable App Gunicorn on systemd
      systemd: state=started name=app_gunicorn
Sample inventory:
docker-dev ansible_host=localhost ansible_connection=local
That's a perfect example of where the docker-systemctl-replacement script should be used.
It was developed to allow Ansible scripts to target both virtual machines and Docker containers. You do not need to enable a real systemd; just overwrite /usr/bin/systemctl in operating systems that are otherwise under systemd control. The Docker container will then look good enough for Ansible, although I am more used to using the general 'service:' module than the specific 'systemd:' module.
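A minimal sketch of that approach (assuming you have copied systemctl3.py from the project into your build context; the file name reflects the project's layout as I recall it, so verify before use):
FROM ubuntu:18.04
# The replacement script is written in Python 3, so make sure python3 exists
RUN apt-get update && apt-get install -y python3 && rm -rf /var/lib/apt/lists/*
# Overwrite systemctl with docker-systemctl-replacement so Ansible's
# service/systemd modules find a working executable
COPY systemctl3.py /usr/bin/systemctl
RUN chmod +x /usr/bin/systemctl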
If it's an option, you can also start from a Docker image with systemd already enabled, such as this one available for Ubuntu 18.04 (see also here).
Here is an example Dockerfile where we start from this image and install Python 3.8 for our app's needs:
FROM jrei/systemd-ubuntu
# INSTALL PYTHON
RUN apt-get update -q -y
RUN apt-get install -q -y python3.8 python3-distutils curl libpq-dev build-essential python3.8-dev
RUN rm /usr/bin/python3
RUN ln -s /usr/bin/python3.8 /usr/bin/python3
RUN curl https://bootstrap.pypa.io/get-pip.py -o get-pip.py
RUN python3.8 get-pip.py
RUN pip3.8 install --upgrade pip
# requirements.txt must be copied into the image before installing from it
COPY requirements.txt .
RUN pip3.8 install -q -r requirements.txt
RUN update-alternatives --install /usr/bin/python python /usr/bin/python3 10
ENV PYTHONPATH "${PYTHONPATH}:."
### then setting the app needs and entrypoint
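Note that for systemd to actually boot inside such a container, the base image's documentation (as I recall it — double-check there) has you run it with tmpfs mounts and read-only cgroups, roughly:
docker run -d --name dev-env --tmpfs /tmp --tmpfs /run --tmpfs /run/lock -v /sys/fs/cgroup:/sys/fs/cgroup:ro <your-image>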

Gcloud meanjs build failing via docker install

I'm trying to deploy the following container on Google Cloud App Engine using gcloud app deploy; it's the meanjs.org vanilla image. It uses a Dockerfile. I'm new to Docker and I'm trying to learn it on the fly, so if anyone can help that'd be great, thanks.
It looks as if the install of Node via the Dockerfile fails. I've checked Node's documentation on GitHub, and nothing has changed syntactically from what is in the existing Dockerfile. I will attempt to recreate this on my local workstation this morning, and will update this query shortly.
The errors (attached as screenshots): first the Docker error, second the build-fail error.
The Dockerfile:
# Build:
# docker build -t meanjs/mean .
#
# Run:
# docker run -it meanjs/mean
#
# Compose:
# docker-compose up -d
FROM ubuntu:latest
MAINTAINER MEAN.JS
# 80 = HTTP, 443 = HTTPS, 3000 = MEAN.JS server, 35729 = livereload, 8080 = node-inspector
EXPOSE 80 443 3000 35729 8080
# Set development environment as default
ENV NODE_ENV development
# Install Utilities
RUN apt-get update -q \
&& apt-get install -yqq \
curl \
git \
ssh \
gcc \
make \
build-essential \
libkrb5-dev \
sudo \
apt-utils \
&& apt-get clean \
&& rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/*
# Install nodejs
RUN curl -sL https://deb.nodesource.com/setup_6.x | sudo -E bash -
RUN sudo apt-get install -yq nodejs \
&& apt-get clean \
&& rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/*
# Install MEAN.JS Prerequisites
RUN npm install --quiet -g gulp bower yo mocha karma-cli pm2 && npm cache clean
RUN mkdir -p /opt/mean.js/public/lib
WORKDIR /opt/mean.js
# Copies the local package.json file to the container
# and utilities docker container cache to not needing to rebuild
# and install node_modules/ everytime we build the docker, but only
# when the local package.json file changes.
# Install npm packages
COPY package.json /opt/mean.js/package.json
RUN npm install --quiet && npm cache clean
# Install bower packages
COPY bower.json /opt/mean.js/bower.json
COPY .bowerrc /opt/mean.js/.bowerrc
RUN bower install --quiet --allow-root --config.interactive=false
COPY . /opt/mean.js
# Run MEAN.JS server
CMD npm install && npm start
Okay, so after much wrestling unsuccessfully trying to install Docker on Windows, I went back to the Dockerfile to try to identify the core issue. Fortunately I found a solution, as follows.
Node.js fails to install on the latest Ubuntu. In the Dockerfile at the root of the app, the Ubuntu version is configured as:
FROM ubuntu:latest
simply change it to:
FROM ubuntu:14.04
I'm not sure if this is the best version to use for the build, but it seems to run successfully. Please feel free to amend this or recommend an alternative solution; I'm new to Docker, so please be kind.
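For what it's worth, a variant that pins both sides (my own sketch, untested against this tutorial) pairs an LTS Ubuntu with a NodeSource release that supported it, instead of relying on ubuntu:latest:
FROM ubuntu:16.04
# Pin a NodeSource setup script that matches this Ubuntu release
RUN apt-get update -q && apt-get install -yqq curl \
    && curl -sL https://deb.nodesource.com/setup_8.x | bash - \
    && apt-get install -yq nodejs \
    && apt-get clean \
    && rm -rf /var/lib/apt/lists/*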

Can't build openjdk:8-jdk image directly

I'm slowly making my way through the Riot Taking Control of your Docker Image tutorial http://engineering.riotgames.com/news/taking-control-your-docker-image. This tutorial is a little old, so there are some definite changes to how the end file looks. After hitting several walls I decided to work in the opposite order of the tutorial. I successfully folded the official jenkinsci image into my personal Dockerfile, starting with FROM openjdk:8-jdk. But when I try to fold the openjdk:8-jdk Dockerfile into my personal image, I receive the following error:
E: Version '8u102-b14.1-1~bpo8+1' for 'openjdk-8-jdk' was not found
ERROR: Service 'jenkinsmaster' failed to build: The command '/bin/sh
-c set -x && apt-get update && apt-get install -y openjdk-8-jdk="$JAVA_DEBIAN_VERSION"
ca-certificates-java="$CA_CERTIFICATES_JAVA_VERSION" && rm -rf
/var/lib/apt/lists/* && [ "$JAVA_HOME" = "$(docker-java-home)" ]'
returned a non-zero code: 100
I'm receiving this error even when I gave up and directly copied and pasted the openjdk:8-jdk Dockerfile into my own. My end goal is to bring my personal Dockerfile down to the point where it starts FROM debian:jessie. Any help would be appreciated.
My Dockerfile:
FROM buildpack-deps:jessie-scm
# A few problems with compiling Java from source:
# 1. Oracle. Licensing prevents us from redistributing the official JDK.
# 2. Compiling OpenJDK also requires the JDK to be installed, and it gets
# really hairy.
RUN apt-get update && apt-get install -y --no-install-recommends \
bzip2 \
unzip \
xz-utils \
&& rm -rf /var/lib/apt/lists/*
RUN echo 'deb http://deb.debian.org/debian jessie-backports main' > /etc/apt/sources.list.d/jessie-backports.list
# Default to UTF-8 file.encoding
ENV LANG C.UTF-8
# add a simple script that can auto-detect the appropriate JAVA_HOME value
# based on whether the JDK or only the JRE is installed
RUN { \
echo '#!/bin/sh'; \
echo 'set -e'; \
echo; \
echo 'dirname "$(dirname "$(readlink -f "$(which javac || which java)")")"'; \
} > /usr/local/bin/docker-java-home \
&& chmod +x /usr/local/bin/docker-java-home
ENV JAVA_HOME /usr/lib/jvm/java-8-openjdk-amd64
ENV JAVA_VERSION 8u102
ENV JAVA_DEBIAN_VERSION 8u102-b14.1-1~bpo8+1
# see https://bugs.debian.org/775775
# and https://github.com/docker-library/java/issues/19#issuecomment-70546872
ENV CA_CERTIFICATES_JAVA_VERSION 20140324
RUN set -x \
&& apt-get update \
&& apt-get install -y \
openjdk-8-jdk="$JAVA_DEBIAN_VERSION" \
ca-certificates-java="$CA_CERTIFICATES_JAVA_VERSION" \
&& rm -rf /var/lib/apt/lists/* \
&& [ "$JAVA_HOME" = "$(docker-java-home)" ]
# see CA_CERTIFICATES_JAVA_VERSION notes above
RUN /var/lib/dpkg/info/ca-certificates-java.postinst configure
# Jenkins Specifics
# install Tini
ENV TINI_VERSION 0.9.0
ENV TINI_SHA fa23d1e20732501c3bb8eeeca423c89ac80ed452
# Use tini as subreaper in Docker container to adopt zombie processes
RUN curl -fsSL https://github.com/krallin/tini/releases/download/v${TINI_VERSION}/tini-static -o /bin/tini && chmod +x /bin/tini \
&& echo "$TINI_SHA /bin/tini" | sha1sum -c -
# Set Jenkins Environmental Variables
ENV JENKINS_HOME /var/jenkins_home
ENV JENKINS_SLAVE_AGENT_PORT 50000
# jenkins version being bundled in this docker image
ARG JENKINS_VERSION
ENV JENKINS_VERSION ${JENKINS_VERSION:-2.19.1}
# jenkins.war checksum, download will be validated using it
ARG JENKINS_SHA=dc28b91e553c1cd42cc30bd75d0f651671e6de0b
ENV JENKINS_UC https://updates.jenkins.io
ENV COPY_REFERENCE_FILE_LOG $JENKINS_HOME/copy_reference_file.log
ENV JAVA_OPTS="-Xmx8192m"
ENV JENKINS_OPTS="--handlerCountMax=300 --logfile=/var/log/jenkins/jenkins.log --webroot=/var/cache/jenkins/war"
# Can be used to customize where jenkins.war get downloaded from
ARG JENKINS_URL=http://repo.jenkins-ci.org/public/org/jenkins-ci/main/jenkins-war/${JENKINS_VERSION}/jenkins-war-${JENKINS_VERSION}.war
ARG user=jenkins
ARG group=jenkins
ARG uid=1000
ARG gid=1000
# Jenkins is run with user `jenkins`, uid = 1000. If you bind mount a volume from the host or a data
# container, ensure you use the same uid.
RUN groupadd -g ${gid} ${group} \
&& useradd -d "$JENKINS_HOME" -u ${uid} -g ${gid} -m -s /bin/bash ${user}
# Jenkins home directory is a volume, so configuration and build history
# can be persisted and survive image upgrades
VOLUME /var/jenkins_home
# `/usr/share/jenkins/ref/` contains all reference configuration we want
# to set on a fresh new installation. Use it to bundle additional plugins
# or config file with your custom jenkins Docker image.
RUN mkdir -p /usr/share/jenkins/ref/init.groovy.d
# Install Jenkins. Could use ADD, but it does not check the Last-Modified header,
# nor does it allow control of the checksum. See https://github.com/docker/docker/issues/8331
RUN curl -fsSL ${JENKINS_URL} -o /usr/share/jenkins/jenkins.war \
&& echo "${JENKINS_SHA} /usr/share/jenkins/jenkins.war" | sha1sum -c -
# Prep Jenkins Directories
USER root
RUN chown -R ${user} "$JENKINS_HOME" /usr/share/jenkins/ref
RUN mkdir /var/log/jenkins
RUN mkdir /var/cache/jenkins
RUN chown -R ${group}:${user} /var/log/jenkins
RUN chown -R ${group}:${user} /var/cache/jenkins
# Expose ports for web (8080) & node (50000) agents
EXPOSE 8080
EXPOSE 50000
# Copy in local config files
COPY init.groovy /usr/share/jenkins/ref/init.groovy.d/tcp-slave-agent-port.groovy
COPY jenkins-support /usr/local/bin/jenkins-support
COPY jenkins.sh /usr/local/bin/jenkins.sh
# NOTE : Just set pluginID to download latest version of plugin.
# NOTE : All plugins need to be listed as there is no transitive dependency resolution.
# from a derived Dockerfile, can use `RUN plugins.sh active.txt` to setup
# /usr/share/jenkins/ref/plugins from a support bundle
COPY plugins.sh /usr/local/bin/plugins.sh
RUN chmod +x /usr/local/bin/plugins.sh
RUN chmod +x /usr/local/bin/jenkins.sh
# Switch to the jenkins user
USER ${user}
# Tini as the entry point to manage zombie processes
ENTRYPOINT ["/bin/tini", "--", "/usr/local/bin/jenkins.sh"]
Try a JAVA_DEBIAN_VERSION of 8u111-b14-2~bpo8+1
Here's what happens: when you build the Dockerfile, Docker tries to execute all the lines in it. One of those is this apt command: apt-get install -y openjdk-8-jdk="$JAVA_DEBIAN_VERSION". This command says "install OpenJDK version $JAVA_DEBIAN_VERSION, exactly; nothing else". That version is no longer available in the Debian repositories, so it can't be installed with apt-get! I believe this happens with all packages in official mirrors: when a new version of a package is released, the older version is no longer around to be installed.
If you want to access older Debian packages, you can use something like http://snapshot.debian.org/. The older OpenJDK package has known security vulnerabilities. I recommend using the latest version.
You can use the latest version by leaving out the explicit version in the apt-get command. On the other hand, this will make your image less reproducible: building the image today may get you u111, building it tomorrow may get you u112.
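For example (a debugging line of my own, not from the thread), you can list which versions apt can currently see before deciding whether to pin:
# Show every openjdk-8-jdk version visible in the configured repositories
RUN apt-get update && apt-cache madison openjdk-8-jdk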
As for why the instructions worked in the other Dockerfile, I think the reason is that at the time the other Dockerfile was built, the package was available. So docker could apt-get install it. Docker then built the image containing the (older) OpenJDK. That image is a binary, so you can install it, or use it in FROM without any issues. But you can't reproduce the image: if you were to try and build the same image yourself, you would run into the same errors.
This also brings up an issue about security updates: since docker images are effectively static binaries (built once, bundle in all dependencies), they don't get security updates once built. You need to keep track of any security updates affecting your docker images and rebuild any affected docker images.
