Enabling NetworkManager inside Docker

I am preparing a test automation setup which requires me to install NetworkManager so that the code's API (which uses python3-networkmanager) can be tested.
In the Dockerfile, I tried installing:
apt-get install dbus \
network-manager
I started receiving this error:
networkmanager.systems do not have hostname property.
I looked for solutions, but it appears they would require:
A privileged user (cannot use a privileged user; project requirement)
A reboot after installing (this is Docker, hence no reboot)
This leaves me with only one option: mocking the Debian NetworkManager so that it can communicate with python3-networkmanager.
I am trying to figure out how I can mock it.

RUN apt-get update && apt-get install -y \
network-manager
worked for me.

I would like to contribute as I had to spend some time getting it to work.
Inside the Dockerfile you have to add:
RUN apt-get update && apt-get install -y network-manager dbus
Also, I added a script to start the network manager:
#!/bin/bash
service dbus start
service NetworkManager start
Then in the Dockerfile you have to call this start script at the end:
COPY start_script.sh /etc/init/
ENTRYPOINT ["/etc/init/start_script.sh"]
Now you can build your container and run it like this:
docker run --net="host" -v /run/dbus/system_bus_socket:/run/dbus/system_bus_socket container
Sharing the host network namespace and the host's D-Bus system socket is what lets the container reach the host's NetworkManager without elevated privileges. For me, this is enough to work with OrangePi and Docker without a privileged container.
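Putting the pieces together, a minimal Dockerfile sketch (the base image is illustrative, and the chmod is only needed if start_script.sh is not already executable in your build context):
FROM debian:stable
RUN apt-get update && apt-get install -y network-manager dbus
# the start script shown above, which launches dbus and NetworkManager
COPY start_script.sh /etc/init/
RUN chmod +x /etc/init/start_script.sh
ENTRYPOINT ["/etc/init/start_script.sh"]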

Related

How to run command on container startup, and keep container running after command is done?

I have a Dockerfile, which is meant to use script1, like so:
# Pull from Debian
FROM debian
# Update apt and install dependencies
RUN apt update
RUN apt -y upgrade
RUN apt -y install wget curl
# Download script1.sh
RUN wget -O ./script1.sh https://example.com
# Make script1.sh executable
RUN chmod +x ./script1.sh
Currently, I can:
Build this Dockerfile into an image
Run said image in a container
Open a CLI in said container, and run script1 manually (with bash ./script1.sh)
The script runs, and the container stays open.
However, I'd like to automatically run this script on container startup.
So I tried to change my Dockerfile to this:
# Pull from Debian
FROM debian
# Update apt and install dependencies
RUN apt update
RUN apt -y upgrade
RUN apt -y install wget curl
# Download script1.sh
RUN wget -O ./script1.sh https://example.com
# Make script1.sh executable
RUN chmod +x ./script1.sh
# Run script1.sh on startup
CMD bash ./script1.sh
However, when I do this, the container only stays open for a little bit, and then exits right away.
I suspect it exits as soon as script1 is done...
I also tried ENTRYPOINT, without much success.
Why does my container stay open if I open a CLI and run the script manually, but doesn't stay open if I try to automatically run it at startup?
And how can I run the script automatically on container startup, and keep the container from exiting right away?
A container exits as soon as its main process (PID 1) finishes: when you run the script manually, the interactive shell is still the main process and keeps the container alive, but with CMD the script itself is the main process, so the container stops the moment the script ends. An old Docker (v2) trick to prevent premature container closing consisted in running an "infinite" command in it, such as:
CMD tail -f /dev/null
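Applied to the Dockerfile above, a minimal sketch (assuming script1.sh is meant to run once and the container should then stay up):
# run the script, then block forever so PID 1 never exits
CMD bash ./script1.sh && tail -f /dev/null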

Running PHP scripts on Synology from within Docker container

Today I had to move my Domoticz/jadahl/Synology setup to one that runs in a Docker container. While this didn’t give any problems, I have one issue. Domoticz allows scripts to be executed when a switch is toggled. I have been running PHP scripts for years this way and I was wondering if it is possible to run a script located on the Synology from the Docker container. Totally new to Docker so forgive any stupid questions.
If not, any tips on how to approach this so I can get back to my day job?
Solved this by creating my own image:
FROM domoticz/domoticz:latest
RUN apt-get update
RUN apt-get upgrade -y
RUN apt-get install etherwake wget curl php-cli php-xml php-soap -y
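With that image built, the Synology script directory can be mounted into the container so Domoticz can call the PHP scripts directly. A sketch (image tag, port, and paths are illustrative):
docker build -t my-domoticz .
docker run -d -p 8080:8080 -v /volume1/scripts:/scripts my-domoticz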

Back-off restarting failed container (OpenShift/Kubernetes)

I have a Dockerfile running Kong-api to deploy on OpenShift. It builds okay, but when I check the pods I get "Back-off restarting failed container". Here is my Dockerfile:
FROM ubuntu:18.04
RUN apt-get update && apt-get install -y apt-transport-https curl lsb-core
RUN echo "deb https://kong.bintray.com/kong-deb `lsb_release -sc` main" | tee -a /etc/apt/sources.list
RUN curl -o bintray.key https://bintray.com/user/downloadSubjectPublicKey?username=bintray
RUN apt-key add bintray.key
RUN apt-get update && apt-get install -y kong
COPY kong.conf /etc/kong/
RUN kong migrations bootstrap [-c /etc/kong/kong.conf]
EXPOSE 8000 8443 8001 8444
ENTRYPOINT ["kong", "start", "[-c /etc/kong/kong.conf]"]
Where am I wrong? Please help me. Thanks in advance.
In order to make Kong start correctly, you need to execute these commands while you have an active Postgres connection:
kong migrations bootstrap && kong migrations up
Also, note that the format of the current ENTRYPOINT is not valid. If you would like to pass options within the ENTRYPOINT, you can write it like this:
ENTRYPOINT ["kong", "start","-c", "/etc/kong/kong.conf"]
Also, you need to remove this line:
RUN kong migrations bootstrap [-c /etc/kong/kong.conf]
Note that the format of the above line is not valid: RUN executes a normal shell command, so the square brackets are passed through literally and are not valid kong arguments.
As you deploy to OpenShift, there are several ways to achieve what you need.
You can make use of initContainers, which allow you to execute the required commands before the actual service is up.
You can check the official helm chart for Kong in order to know how it works or use helm to install Kong itself.
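As a sketch of the same idea with plain docker (the connection details are illustrative; KONG_DATABASE and KONG_PG_HOST are standard Kong environment variables), the migrations can be run in a one-off container before the service starts:
docker run --rm -e KONG_DATABASE=postgres -e KONG_PG_HOST=my-postgres kong:latest kong migrations bootstrap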

Creation of Dockerfile for Android GitLab CI

I'm creating my own Dockerfile for a runner that will work in GitLab CI as an Android project runner. The problem is that I need to connect a physical device to the machine on which I will deploy that runner. As usual on a Linux machine, I was trying to add 51-android.rules into /etc/udev/rules.d as in this tutorial: Udev Setup
During execution of the docker build . command, I got this error:
/bin/sh: 1: udevadm: not found
My questions are:
1) Is it possible to connect a physical Android device to an OS running inside Docker?
2) If yes, where is my mistake?
The problematic Dockerfile part:
FROM ubuntu:latest
#Ubuntu setup
RUN apt-get update
RUN apt-get install -y wget
...
#Setup Android Udev Rules
RUN wget https://raw.githubusercontent.com/M0Rf30/android-udev-rules/master/51-android.rules
RUN mv -y `pwd`/51-android.rules /etc/udev/rules.d
RUN chmod a+r /etc/udev/rules.d/51-android.rules
RUN udevadm control --reload-rules
RUN service udev restart
RUN usermod -a -G plugdev `whoami`
RUN adb kill-server
RUN adb devices
#Cleaning
RUN apt-get clean
The philosophy of Docker is to have one process running per container. There is usually no init system, so you cannot use services as you are used to.
I don't know if it's possible to achieve what you are trying to do, but I think you want the udev rules on the host, and to add the device when you start the container: https://docs.docker.com/engine/reference/commandline/run/#add-host-device-to-container-device
Also you may want to read https://docs.docker.com/engine/userguide/eng-image/dockerfile_best-practices/#/apt-get
Every RUN creates a new layer, only adding information to the container.
Having said that, you probably want to have adb devices as the ENTRYPOINT or CMD of your container.
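A sketch of the run-time alternative (my-android-runner is a hypothetical image with adb installed; --device exposes the host's USB bus inside the container):
docker run --device=/dev/bus/usb my-android-runner adb devices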

Get rid of VMware and move to Docker: how to properly set up the Dockerfile or the container?

I am a PHP developer, so most of the time to test any application I am working on I do the following:
Create a VMware VM and install a complete OS: most of the time I like to use CentOS
Set up everything on the VM, meaning: Apache and modules, PHP and modules, and MySQL or MariaDB
Any time I start a new VM from scratch, there are a few steps I run:
# Install EPEL and Remi Repos
wget https://dl.fedoraproject.org/pub/epel/epel-release-latest-6.noarch.rpm
wget http://rpms.remirepo.net/enterprise/remi-release-6.rpm
rpm -Uvh remi-release-6.rpm epel-release-latest-6.noarch.rpm
# Install Apache, PHP and its dependencies
yum -y install php php-common php-cli php-fpm php-gd php-intl php-mbstring php-mcrypt php-opcache php-pdo php-pear php-pecl-apcu php-imagick php-pecl-xdebug php-pgsql php-xml php-mysqlnd php-pecl-zip php-process php-soap
# Start Apache on run levels 2, 3 and 5
chkconfig --levels 235 httpd on
# Setup MariaDB repos
nano /etc/yum.repos.d/MariaDB.repo
# Write this inside the MariaDB.repo file
[mariadb]
name = MariaDB
baseurl = http://yum.mariadb.org/5.5/centos6-amd64
gpgkey=https://yum.mariadb.org/RPM-GPG-KEY-MariaDB
gpgcheck=1
# Install MariaDB
yum -y install MariaDB MariaDB-server
# Start service
service mysql start
# Start MariaDB on run levels 2, 3 and 5
chkconfig --levels 235 mysql on
# Setup MariaDB (this is interactive)
/usr/bin/mysql_secure_installation
# A few more steps
This is an annoying task and I need to do it all the time (whenever I mess up the VM trying new things and changing here and there). This is where Docker, I think, comes to the rescue. After reading a bit I know the basics of Docker, and I have pulled a CentOS image by running docker run -it centos, but that is just a bash shell on a basic CentOS image, so it is my task to install and set up everything.
Here are my doubts about Docker and how to handle these repetitive and common tasks:
Should I create a Dockerfile (this is my first Dockerfile, so perhaps the order is not right or I am completely mistaken) with the content below and put all the repetitive tasks inside the run-setup.sh file?
FROM centos:latest
MAINTAINER MyName <MyEmail>
RUN yum -y update && yum clean all
ADD run-setup.sh /run-setup.sh
RUN chmod -v +x /run-setup.sh
CMD ["/run-setup.sh"]
EXPOSE 80
Should I run the repetitive tasks by hand as I did before on the VM?
The command /usr/bin/mysql_secure_installation is completely interactive, since I need to answer a few questions and set a password. How do I deal with this one or any other interactive command?
Any better idea?
I will start answering your questions:
Yes, you could start with a Dockerfile. However, I recommend putting the commands straight into the file so that it's easier to maintain in the future. An example could be the Apache Dockerfile on GitHub.
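A sketch of what putting the commands straight into the file could look like for this stack (the package list is abbreviated; httpd must run in the foreground to keep the container alive):
FROM centos:latest
RUN yum -y update && \
    yum -y install httpd php php-cli php-mysqlnd && \
    yum clean all
EXPOSE 80
CMD ["/usr/sbin/httpd", "-DFOREGROUND"]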
Repetitive tasks, no. You could save the container images by pushing them to a public registry like Docker Hub, or you could host a private registry, which can itself be a Docker container.
Interactiveness should be worked around with command-line options, bash read, or passing a file where possible. I do not think there is a single answer to this.
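For mysql_secure_installation specifically, one non-interactive stand-in is to issue the equivalent SQL directly. A sketch for MariaDB 5.5 (the password is illustrative, and it assumes the service script can start the server during the build):
# start the server just long enough to run the SQL; the result is committed with the layer
RUN service mysql start && \
    mysql -e "UPDATE mysql.user SET Password=PASSWORD('changeme') WHERE User='root'; \
        DELETE FROM mysql.user WHERE User=''; \
        DROP DATABASE IF EXISTS test; \
        FLUSH PRIVILEGES;"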
Better ideas: the usual pattern is to host the Dockerfile in a GitHub or Bitbucket public repository and then configure automated builds against Docker Hub. They all come for free :)
There are also many live working examples you can get from Docker Hub. Start searching for an image and choose the most popular/official one; then you will have links to its Dockerfile.
Let me know how it goes.
