How to enable systemd in a Dockerfile with Ubuntu 18.04 - docker

I know systemd is not recommended in Docker containers, but is it possible?
I have staging/prod environments on Ubuntu 18.04 cloud VMs deployed with Ansible.
My current dev environment is an Ubuntu 18.04 Vagrantfile that uses the same Ansible playbook.yml as staging/prod.
Now I'm trying to replace the Vagrantfile with a Dockerfile for development, but the Ansible playbook.yml fails when applying systemd modules. I would like to have systemd in my dev environment as well so that I can test changes to my playbook.yml locally. Any idea how I can do it?
If I try to build with the Dockerfile and playbook.yml below, I get the error Failed to find required executable systemctl in paths.
If I add RUN apt-get install systemd to the Dockerfile and try to build, I get the error System has not been booted with systemd as init system.
Sample Dockerfile:
FROM ubuntu:18.04
ADD . /app
WORKDIR /app
# Install Python3 pip, used to install Ansible
RUN apt-get update && apt-get install -y \
    python3-pip
# Install Ansible
RUN pip3 install --trusted-host pypi.python.org ansible
RUN ansible-playbook playbook.yml -i inventory
EXPOSE 80
Sample playbook.yml:
---
- name: Ansible playbook to setup dev environment
  hosts: all
  vars:
    ansible_python_interpreter: "/usr/bin/python3"
    debug: True
  become: yes
  become_method: sudo
  tasks:
    - name: Copy App Gunicorn systemd config
      template:
        src: app_gunicorn.service
        dest: /etc/systemd/system/
    - name: Enable App Gunicorn on systemd
      systemd: state=started name=app_gunicorn
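For reference, a minimal shape for the app_gunicorn.service template used above (a hypothetical sketch; the user, working directory, and gunicorn invocation are assumptions, not taken from the question):

[Unit]
Description=Gunicorn daemon for the app
After=network.target

[Service]
User=www-data
WorkingDirectory=/app
ExecStart=/usr/local/bin/gunicorn --bind 0.0.0.0:80 app.wsgi
Restart=on-failure

[Install]
WantedBy=multi-user.target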
Sample inventory:
docker-dev ansible_host=localhost ansible_connection=local

That's a perfect example of where the docker-systemctl-replacement script should be used.
It was developed to allow Ansible scripts to target both virtual machines and Docker containers. You do not need to enable a real systemd; just overwrite /usr/bin/systemctl in operating systems that are otherwise under systemd control. The Docker container will then look good enough for Ansible, though I tend to use the generic 'service:' module rather than the specific 'systemd:' module.
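As a minimal sketch (the raw-file path below is an assumption about the current layout of the gdraheim/docker-systemctl-replacement repository; pin a release tag rather than master in practice), the question's Dockerfile could overwrite systemctl before running the playbook:

FROM ubuntu:18.04
# Replace systemctl with the docker-systemctl-replacement script so that
# Ansible's systemd/service modules work without a real systemd init.
ADD https://raw.githubusercontent.com/gdraheim/docker-systemctl-replacement/master/files/docker/systemctl3.py /usr/bin/systemctl
RUN chmod +x /usr/bin/systemctl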

If it's an option, you can also start from a Docker image with systemd already enabled, such as jrei/systemd-ubuntu, which is available for Ubuntu 18.04.
Here is an example Dockerfile where we start from this image and install Python 3.8 for our app's needs:
FROM jrei/systemd-ubuntu:18.04
# INSTALL PYTHON
RUN apt-get update -q -y
RUN apt-get install -q -y python3.8 python3-distutils curl libpq-dev build-essential python3.8-dev
RUN rm /usr/bin/python3
RUN ln -s /usr/bin/python3.8 /usr/bin/python3
RUN curl https://bootstrap.pypa.io/get-pip.py -o get-pip.py
RUN python3.8 get-pip.py
RUN pip3.8 install --upgrade pip
# requirements.txt must be copied from the build context before installing
COPY requirements.txt .
RUN pip3.8 install -q -r requirements.txt
RUN update-alternatives --install /usr/bin/python python /usr/bin/python3 10
ENV PYTHONPATH "${PYTHONPATH}:."
### then set up the app's needs and entrypoint
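Note that systemd images like this one typically have to be run with the host cgroup filesystem mounted and tmpfs on /run; the exact flags below are an assumption, so check the image's README for the canonical invocation:

docker run -d --name systemd-ubuntu \
    --tmpfs /tmp --tmpfs /run --tmpfs /run/lock \
    -v /sys/fs/cgroup:/sys/fs/cgroup:ro \
    jrei/systemd-ubuntu:18.04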

Related

Installing Kubernetes in Docker container

I want to try Kubeflow to check it out and see if it fits my projects. I want to deploy it locally as a development server, but I have Windows on my computer and Kubeflow only works on Linux. I'm not allowed to dual-boot this computer. I could install a virtual machine, but I thought it would be easier to use Docker, and oh boy, was I wrong. So, the problem is that I want to install Kubernetes in a Docker container; right now, this is the Dockerfile I've written:
# Docker file with local deployment of Kubeflow
FROM ubuntu:18.04
ENV USER=Joao
ENV PASSWORD=Password
ENV WK_DIR=/home/${USER}
# Setup Ubuntu
RUN apt-get update -y
RUN apt-get install -y conntrack sudo wget
RUN useradd -rm -d /home/${USER} -s /bin/bash -g root -G sudo -u 1001 -p ${PASSWORD} ${USER}
WORKDIR ${WK_DIR}
# Installing Docker CE
RUN apt-get install -y apt-transport-https ca-certificates curl software-properties-common
RUN curl -fsSL https://download.docker.com/linux/ubuntu/gpg | apt-key add -
RUN add-apt-repository \
    "deb [arch=amd64] https://download.docker.com/linux/ubuntu \
    $(lsb_release -cs) \
    stable"
RUN apt-get update -y
RUN apt-get install -y docker-ce docker-ce-cli containerd.io
# Installing Kubectl
RUN curl -LO https://storage.googleapis.com/kubernetes-release/release/v1.15.0/bin/linux/amd64/kubectl
RUN chmod +x ./kubectl
RUN mv ./kubectl /usr/local/bin/kubectl
# Installing Minikube
RUN curl -LO https://storage.googleapis.com/minikube/releases/latest/minikube-linux-amd64
RUN install minikube-linux-amd64 /usr/local/bin/minikube
ENV PATH="${PATH}:${WK_DIR}"
COPY start.sh start.sh
CMD sh start.sh
With this, just to make the deployment easier, I also have a docker-compose.yaml that looks like this:
services:
  kf-local:
    build: .
    volumes:
      - path/to/folder:/usr/kubeflow
    privileged: true
And start.sh looks like this:
service docker start
minikube start \
    --extra-config=apiserver.service-account-issuer=api \
    --extra-config=apiserver.service-account-signing-key-file=/var/lib/minikube/certs/apiserver.key \
    --extra-config=apiserver.service-account-api-audiences=api \
    --driver=docker
The problem is, whenever I try running this I get the error:
X Exiting due to DRV_AS_ROOT: The "docker" driver should not be used with root privileges.
I've also tried creating a user and running it from there, but then I'm not able to run sudo. Any idea how I could install Kubernetes in a Docker container?
As you suspected, a VM is indeed the easy way to test this out.
Instead of setting up Kubernetes inside Docker, you can use a Linux system container for development testing.
There is a Linux container technology called LXC. Docker is a kind of application container, while, in simple words, LXC is like a VM for local development testing: you can install your stack into it directly rather than setting the application up inside a Docker image.
Read some details about LXC: https://medium.com/@harsh.manvar111/lxc-vs-docker-lxc-101-bd49db95933a
You can also run it on Windows; try it out at https://linuxcontainers.org/
If you have read the Kubeflow documentation, there is also the Multipass option:
Multipass creates a Linux virtual machine on Windows, Mac or Linux systems. The VM contains a complete Ubuntu operating system which can then be used to deploy Kubernetes and Kubeflow.
Learn more about Multipass: https://multipass.run/#install
Insufficient user permissions on the docker group and the minikube directory cause this error ("X Exiting due to DRV_AS_ROOT: The "docker" driver should not be used with root privileges.").
You can fix it by adding your user to the docker group and setting permissions on the minikube profile directory (the two commands below use $USER, which expands to your username):
sudo usermod -aG docker $USER && newgrp docker
sudo chown -R $USER $HOME/.minikube; chmod -R u+wrx $HOME/.minikube
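With the permissions fixed, starting minikube as that user should now work with the same driver as in the question:

minikube start --driver=docker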

How do I access the frontend of Jenkins running in an Ubuntu Docker container?

I'm setting up Jenkins on my local machine to get it ready for a production deployment environment. I need to make sure that my setup steps mirror the production setup. Production is running Ubuntu 16.04, while my local machine is running macOS Catalina.
To make sure I can walk through setup as will be necessary in production, I'm using Docker to run the same OS as prod and installing Jenkins in that container.
I have installed Jenkins in the Docker container (which is FROM ubuntu:16.04). I'm unsure of the next steps, though. How do I expose the Jenkins frontend so that I can access it in my browser?
This may not be necessary to answer the question, but here's my Dockerfile:
FROM ubuntu:16.04
RUN apt-get update
# Install Jenkins dependencies and Jenkins
RUN apt-get install -y wget sudo vim apt-transport-https ca-certificates apt-utils
RUN wget -q -O - https://pkg.jenkins.io/debian-stable/jenkins.io.key | sudo apt-key add -
RUN sh -c 'echo deb https://pkg.jenkins.io/debian-stable binary/ > \
    /etc/apt/sources.list.d/jenkins.list'
RUN apt-get update
RUN apt-get install -y jenkins
# Install Java
RUN apt-get -o Dpkg::Options::="--force-overwrite" install -y openjdk-8-jdk
# add the java binaries to jenkins PATH
RUN sed -i "s|PATH=/bin:/usr/bin:/sbin:/usr/sbin|PATH=/bin:/usr/bin:/sbin:/usr/sbin:/usr/lib/jvm/java-8-openjdk-amd64/bin|g" \
    /etc/init.d/jenkins
After building that, I exec in and run service jenkins start to start Jenkins.
Newbie to Docker, thanks for the help!
To get access to the Jenkins web interface you need to publish its default port (8080) when running the container with the Jenkins master.
For example:
docker run -dit -p 8080:8080 your_jenkins_image
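Jenkins will then be reachable from the host at http://localhost:8080. If you later connect inbound build agents, you can publish their TCP port the same way (a sketch; 50000 is the conventional agent port, configured in Jenkins' security settings):

docker run -dit -p 8080:8080 -p 50000:50000 your_jenkins_image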

App Engine Flexible Environment - Dockerfile installing outdated version of GDAL

I am trying to use a Docker image on Google App Engine Flexible Environment.
FROM ubuntu:bionic
MAINTAINER Makina Corpus "contact@makina-corpus.com"
ENV PYTHONUNBUFFERED 1
ENV DEBIAN_FRONTEND noninteractive
ENV LANG C.UTF-8
RUN apt-get update -qq && apt-get install -y -qq \
    # std libs
    git less nano curl \
    ca-certificates \
    wget build-essential \
    # python basic libs
    python3.8 python3.8-dev python3.8-venv gettext \
    # geodjango
    gdal-bin binutils libproj-dev libgdal-dev \
    # postgresql
    libpq-dev postgresql-client && \
    apt-get clean all && rm -rf /var/lib/apt/lists/* && rm -rf /var/cache/apt/*
# install pip
RUN wget https://bootstrap.pypa.io/get-pip.py && python3.8 get-pip.py && rm get-pip.py
RUN pip3 install --no-cache-dir setuptools wheel -U
CMD ["/bin/bash"]
The Docker image appears to build correctly, but when the service deploys, the application crashes and I get this error message:
File "/Users/NAME/Documents/gcp/google-cloud-sdk/lib/googlecloudsdk/api_lib/app/operations_util.py", line 183, in IsDone
encoding.MessageToPyValue(operation.error)))
OperationError: Error Response: [9]
Application startup error! Code: APP_CONTAINER_CRASHED
ERROR: (gcloud.app.deploy) Error Response: [9]
Application startup error! Code: APP_CONTAINER_CRASHED
This is failing because the Dockerfile is installing a significantly outdated version of the GDAL package, which conflicts with the more current Python installation.
How do I ensure that the Dockerfile has the correct package repository and installs the right, up-to-date versions? Is there some line I can insert to update the repository, or at least print the repository, before it starts installing?
EDIT:
My app.yaml:
# [START django_app]
runtime: custom
env: flex
entrypoint: gunicorn -b :$PORT MyApplication.wsgi
runtime_config:
  python_version: 3
# [END runtime]
handlers:
# This configures Google App Engine to serve the files in the app's static
# directory.
#- url: /static
#  static_dir: static/
#- url: /MyApplication/static
#  static_dir: MyApplication/static/
# This handler routes all requests not caught above to your main app. It is
# required when static routes are defined, but can be omitted (along with
# the entire handlers section) when there are no static files defined.
- url: /.*
  script: auto
# [END django_app]
resources:
  cpu: 1
  memory_gb: 2
  disk_size_gb: 10
Your App Engine deployment is failing because it needs a service listening on port 8080, and it cannot just run bash in the cloud. If you need to debug your App Engine Flex instance, you first need to get a service on port 8080 and then enable SSH.
Similar issues are being tackled here and here
Your Dockerfile should run a command that spins up your application listening on port 8080:
CMD gunicorn -b :$PORT MyApplication.wsgi
GAE actually spins up containers with docker run, and I am not sure why it would also have the entrypoint specified in the app.yaml file. Better not to ask too many questions with GAE.
Other issues for you to think about, as mentioned in some of the comments above:
Wouldn't it be better to use Google's GAE base image -> FROM gcr.io/google-appengine/python?
If so, you need to consider that it is based on Ubuntu 16.04, and you need to update dependencies (by adding the UbuntuGIS PPA: add-apt-repository -y ppa:ubuntugis/ppa); see the sketch after this list.
How do you install your other dependencies? Running pip using a requirements file?
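A minimal sketch of that PPA setup (the base image and PPA are from the points above; the GDAL package names are carried over from the question's Dockerfile):

FROM gcr.io/google-appengine/python
# Ubuntu 16.04 base: pull a newer GDAL from the UbuntuGIS PPA
RUN apt-get update -qq && \
    apt-get install -y -qq software-properties-common && \
    add-apt-repository -y ppa:ubuntugis/ppa && \
    apt-get update -qq && \
    apt-get install -y -qq gdal-bin libgdal-dev && \
    apt-get clean && rm -rf /var/lib/apt/lists/*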

Jenkins not starting in docker (Dockerfile included)

I am attempting to build a simple app with Jenkins in a docker container. I have the following Dockerfile:
FROM ubuntu:trusty
# Install dependencies for Flask app.
RUN sudo apt-get update
RUN sudo apt-get install -y vim
RUN sudo apt-get install -y curl
RUN sudo apt-get install -y python3-pip
RUN pip3 install flask
# Install dependencies for Jenkins (Java).
# Install Java 1.8.
RUN sudo apt-get install -y python-software-properties debconf-utils
RUN sudo apt-get install -y software-properties-common
RUN sudo add-apt-repository -y ppa:webupd8team/java
RUN sudo apt-get update
RUN echo "oracle-java8-installer shared/accepted-oracle-license-v1-1 select true" | sudo debconf-set-selections
RUN sudo apt-get install -y oracle-java8-installer
# Install, start Jenkins.
RUN sudo apt-get install -y wget
RUN wget -q -O - https://pkg.jenkins.io/debian/jenkins-ci.org.key | apt-key add -
RUN echo deb http://pkg.jenkins-ci.org/debian binary/ > /etc/apt/sources.list.d/jenkins.list
RUN sudo apt-get update
RUN sudo apt-get install -y jenkins
RUN sudo /etc/init.d/jenkins start
COPY ./app /app
CMD ["python3","/app/main.py"]
I run this container with the following:
docker build -t jenkins_test .
docker run --name jenkins_test_container -tid -p 5000:5000 -p 8080:8080 jenkins_test:latest
I am able to start Flask and install Jenkins; however, when the container runs, Jenkins is not running. curl localhost:8080 is not successful.
In the log output, I am able to see:
Correct java version found
* Starting Jenkins Automation Server jenkins [ OK ]
However, it's still not running.
I can ssh into the container and manually run sudo /etc/init.d/jenkins start to start it, but I want it to start on docker run or docker build.
I have also tried putting sudo /etc/init.d/jenkins start in the CMD portion of the Dockerfile:
CMD python3 /app/main.py; sudo /etc/init.d/jenkins start
With this, I am able to curl Flask, but still not Jenkins.
How can I get Jenkins to start automatically?
There are some points that you need to be aware of:
No need to use sudo, as the default user is root already.
In order to run multiple services in the same container you need to use some kind of process manager like Supervisord. Jenkins is not running because the CMD is the main entry point for your container, so only Flask is being run. Check the following link to learn how to start multiple services in Docker.
RUN is executed only during the build process, unlike CMD, which is executed each time you start a container from that image.
Combine the RUN lines together where possible in order to minimize the build layers, which leads to a smaller Docker image.
Regarding the usage of this:
CMD python3 /app/main.py; sudo /etc/init.d/jenkins start
It does not work for you because python3 /app/main.py does not run as a background process, so sudo /etc/init.d/jenkins start won't run until the previous command finishes.
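For illustration, a minimal Supervisord layout might look like the following (the config path follows the Debian supervisor package convention, and /usr/share/jenkins/jenkins.war is that package's war location; both are assumptions, not taken from the thread):

# Dockerfile additions (sketch)
RUN apt-get install -y supervisor
COPY app.conf /etc/supervisor/conf.d/app.conf
CMD ["/usr/bin/supervisord", "-n"]

# app.conf (sketch)
[program:flask]
command=python3 /app/main.py

[program:jenkins]
; run the war directly so the process stays in the foreground for supervisord
command=/usr/bin/java -jar /usr/share/jenkins/jenkins.war --httpPort=8080
user=jenkins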
I was only able to get this to work by starting Jenkins in the CMD portion, but I needed to start Jenkins before Flask, since Flask runs continuously and the next command would never execute:
Did not work:
CMD python3 /app/main.py; sudo /etc/init.d/jenkins start
This did work:
CMD sudo /etc/init.d/jenkins start; python3 /app/main.py
EDIT:
I believe putting it in the RUN portion would not work because the build only commits filesystem changes; it does not save any running services. I'm not sure if containers can be saved and loaded with running processes like that, but I might be wrong. Would appreciate clarification if so.
It seems like a thing that should be in RUN, so if anyone knows why that didn't work, or knows some best practices, I would also appreciate the info.

uwsgi-nginx in Docker does not work

I have a Dockerfile like:
FROM python:3.6.5-jessie
MAINTAINER twitter myname
RUN apt-get update
RUN apt-get install -y git
RUN apt-get install -y nginx
RUN pip install --upgrade pip
RUN git clone https://github.com/hongmingu/requirements
RUN pip install -r /requirements/requirements_django.txt
RUN apt-get install -y vim
RUN mkdir -p /uwsgi_log
RUN git clone https://github.com/hongmingu/smaple_django
RUN apt-get install -y nginx
COPY ./nginx.conf /etc/nginx/nginx.conf
COPY ./uwsgi.ini /uwsgi.ini
# uwsgi.ini runs uwsgi in daemonized mode.
# These files are just settings files: nginx receives requests on port 8000 and uwsgi runs the Django project.
RUN uwsgi --ini /uwsgi.ini
RUN service nginx restart
CMD ["python3"]
I think these two lines do not work:
RUN uwsgi --ini /uwsgi.ini
RUN service nginx restart
When I build it and run it with sudo docker run --rm -it -p 8080:8000 hongmingu/smaple:0.1 /bin/bash, 127.0.0.1:8080 does not work. But when I attach to the container and type the commands manually (uwsgi --ini /uwsgi.ini and service nginx restart), it works well.
So, is it impossible to run uwsgi and nginx from a Dockerfile?
I want to do this so that I don't need to run uwsgi and nginx manually.
Where did I go wrong? Is there a good way to do this?
The Docker image (hongmingu/smaple:0.1) is here: https://cloud.docker.com/u/hongmingu/repository/docker/hongmingu/smaple
You misunderstood the RUN instruction.
The RUN instruction will execute any commands in a new layer on top of the current image and commit the results
It is used to build your image; it is not docker run, which executes commands in the container.
The solution is to execute those two lines from CMD or ENTRYPOINT with a shell script; uwsgi also has to be daemonized. Check out this image: https://github.com/tiangolo/uwsgi-nginx-docker
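A minimal sketch of such a start script (file names taken from the question; running nginx in the foreground is an assumed choice so that the container stays alive):

#!/bin/sh
# start.sh: uwsgi daemonizes itself via uwsgi.ini, then nginx runs in the foreground
uwsgi --ini /uwsgi.ini
exec nginx -g 'daemon off;'

With the corresponding Dockerfile lines replacing the two RUN commands:

COPY start.sh /start.sh
RUN chmod +x /start.sh
CMD ["/start.sh"]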
