AEM 6.0 on Docker - D-Bus connection error

I'm trying to dockerize an AEM 6.0 installation, and this is the Dockerfile for my author.
from centos:latest
COPY aem6.0-author-p4502.jar /AEM/aem/author/aem6.0-author-p4502.jar
COPY license.properties /AEM/aem/author/license.properties
RUN yum install dnsmasq -y
RUN systemctl enable dnsmasq
RUN yum install initscripts -y
RUN (cd /lib/systemd/system/sysinit.target.wants/; for i in *; do [ $i == systemd-tmpfiles-setup.service ] || rm -f $i; done); \
rm -f /lib/systemd/system/multi-user.target.wants/*;\
rm -f /etc/systemd/system/*.wants/*;\
rm -f /lib/systemd/system/local-fs.target.wants/*; \
rm -f /lib/systemd/system/sockets.target.wants/*udev*;\
rm -f /lib/systemd/system/sockets.target.wants/*initctl*;\
rm -f /lib/systemd/system/basic.target.wants/*;\
rm -f /lib/systemd/system/anaconda.target.wants/*;
WORKDIR /AEM/aem/author
RUN yum install wget -y
RUN wget --no-cookies --no-check-certificate --header "Cookie: gpw_e24=http%3A%2F%2Fwww.oracle.com%2F; oraclelicense=accept-securebackup-cookie" "http://download.oracle.com/otn-pub/java/jdk/8u151-b12/e758a0de34e24606bca991d704f6dcbf/jdk-8u151-linux-x64.rpm"
RUN yum localinstall jdk-8u151-linux-x64.rpm -y
RUN java -XX:MaxPermSize=256m -Xmx512M -jar aem6.0-author-p4502.jar -unpack
COPY aem6 /etc/init.d/aem6
RUN chkconfig --add aem6
RUN yum -y install initscripts && yum update -y & yum clean all
RUN chown -R $USER:$(id -G) /etc/init.d
RUN chmod 777 -R /etc/init.d/aem6
RUN systemctl enable aem6.service
RUN service aem6 start
VOLUME /sys/fs/cgroup
CMD /usr/sbin/init
The build fails on starting the service, with the error "Failed to get D-Bus connection". I haven't been able to figure out how to fix it.
I've tried these:
- https://github.com/CentOS/sig-cloud-instance-images/issues/45
- https://hub.docker.com/_/centos/

Here, the problem is that you're trying to start the aem service during the "build" phase, with this statement:
RUN service aem6 start
This is problematic for a number of reasons. First, you're building an image. Starting a service at this stage is pointless: when the build process completes, nothing is running. An image is just a collection of files. You don't have any processes until you boot a container, at which point your CMD and ENTRYPOINT determine what runs.
Another problem is that at this stage, nothing else is running inside the container environment. The service command is trying to communicate with systemd over the D-Bus API, but neither systemd nor D-Bus is running.
There is a third, slightly more subtle problem: the solution you've chosen relies on systemd, the standard CentOS process manager, and broadly speaking you've configured it correctly (by enabling the service with systemctl enable ... and by starting /sbin/init in your CMD statement). However, running systemd in a container can be tricky, although it is possible. In the past, systemd required the container to run with the --privileged flag; I'm not sure if this is still necessary.
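If you do keep the systemd approach, the usual invocation (a sketch based on the CentOS image documentation linked in the question; the image name is hypothetical and the exact flags vary by Docker and systemd version) looks something like:
docker run -d --privileged -v /sys/fs/cgroup:/sys/fs/cgroup:ro my-aem-image
The cgroup mount and, on older Docker versions, the --privileged flag give systemd the kernel interfaces it expects.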
If you weren't running multiple processes (dnsmasq and aem) inside the container, the simplest solution would be to start the aem service directly, rather than relying on a process manager. This would reduce your Dockerfile to something like:
FROM centos:latest
COPY aem6.0-author-p4502.jar /AEM/aem/author/aem6.0-author-p4502.jar
COPY license.properties /AEM/aem/author/license.properties
WORKDIR /AEM/aem/author
RUN yum install wget -y
RUN wget --no-cookies --no-check-certificate --header "Cookie: gpw_e24=http%3A%2F%2Fwww.oracle.com%2F; oraclelicense=accept-securebackup-cookie" "http://download.oracle.com/otn-pub/java/jdk/8u151-b12/e758a0de34e24606bca991d704f6dcbf/jdk-8u151-linux-x64.rpm"
RUN yum localinstall jdk-8u151-linux-x64.rpm -y
RUN java -XX:MaxPermSize=256m -Xmx512M -jar aem6.0-author-p4502.jar -unpack
CMD ["java", "-Xmx1024m", "-jar", "aem6.0-author-p4502.jar", "-nofork"]
(The exact start command is up to you; the quickstart jar's -nofork option is commonly used here because it keeps the JVM in the foreground, which is what Docker expects of a container's main process.)
If you actually require dnsmasq, you could run it in a second container (potentially sharing the same network environment as the aem container).
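For example (a sketch; the image and container names are hypothetical):
docker run -d --name aem my-aem-image
docker run -d --name dns --network container:aem my-dnsmasq-image
The --network container:aem flag makes the dnsmasq container share the aem container's network namespace, so both processes see the same interfaces and localhost, much as if they ran in a single container.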

Related

How To Start MariaDB And Keep it Running Centos Based Docker Image

I'm trying to create a Dockerfile (the base OS must be CentOS) that will install MariaDB, start MariaDB, and keep MariaDB running, so that I can use the container in GitLab to run my integration tests (Java). This is what I have so far:
FROM centos:7
ENV container docker
RUN (cd /lib/systemd/system/sysinit.target.wants/; for i in *; do [ $i == \
systemd-tmpfiles-setup.service ] || rm -f $i; done); \
rm -f /lib/systemd/system/multi-user.target.wants/*;\
rm -f /etc/systemd/system/*.wants/*;\
rm -f /lib/systemd/system/local-fs.target.wants/*; \
rm -f /lib/systemd/system/sockets.target.wants/*udev*; \
rm -f /lib/systemd/system/sockets.target.wants/*initctl*; \
rm -f /lib/systemd/system/basic.target.wants/*;\
rm -f /lib/systemd/system/anaconda.target.wants/*;
VOLUME [ "/sys/fs/cgroup" ]
CMD ["/usr/sbin/init"]
RUN rm /bin/sh && ln -s /bin/bash /bin/sh
# Install epel and java
RUN yum install -y java-1.8.0-openjdk java-1.8.0-openjdk-devel wget
ENV JAVA_HOME /usr/lib/jvm/java-1.8.0-openjdk/
EXPOSE 8080
EXPOSE 3306
# install mariadb
RUN yum -y install mariadb
RUN yum -y install mariadb-server
RUN systemctl start mariadb
ENTRYPOINT tail -f /dev/null
The error I'm getting is
Failed to get D-Bus connection: Operation not permitted
You can do something like this:
FROM centos/mariadb-102-centos7
USER root
# Install epel and java
RUN yum install -y java-1.8.0-openjdk java-1.8.0-openjdk-devel wget
ENV JAVA_HOME /usr/lib/jvm/java-1.8.0-openjdk/
You can mount your code folder into this container and execute it with docker exec.
It is recommended however you use two different containers: one for the db and one for your code. You can then pass the code container the env vars required to connect to the db container.
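For example (a sketch; the names and variables are hypothetical, and the env vars the database image actually accepts are documented on its Docker Hub page):
docker network create testnet
docker run -d --name db --network testnet -e MYSQL_USER=test -e MYSQL_PASSWORD=secret -e MYSQL_DATABASE=tests centos/mariadb-102-centos7
docker run --rm --network testnet -e DB_HOST=db -e DB_USER=test -e DB_PASSWORD=secret my-test-image
Containers on the same user-defined network can reach each other by container name, so the test container can connect to host db on port 3306.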
Nothing is running by default in a container, including systemd, so you cannot use systemctl to start mariadb.
If we look at the official mariadb Dockerfile, we can see that mariadb can be started by adding CMD ["mysqld"] to the Dockerfile.
You must also make sure mariadb is installed in your container, e.g. with RUN yum -y install mariadb-server mariadb (on CentOS the client package is mariadb, not mariadb-client), as it is not installed by default either.
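Putting those pieces together, a minimal sketch might look like this (simplified: the official image initializes the data directory in an entrypoint script at first start, not at build time):
FROM centos:7
RUN yum -y install mariadb-server mariadb && yum clean all
EXPOSE 3306
# Initialize the data directory at build time (a simplification)
RUN mysql_install_db --user=mysql --datadir=/var/lib/mysql
USER mysql
# mysqld_safe stays in the foreground, which keeps the container running
CMD ["mysqld_safe"]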

Jenkins not starting in docker (Dockerfile included)

I am attempting to build a simple app with Jenkins in a docker container. I have the following Dockerfile:
FROM ubuntu:trusty
# Install dependencies for Flask app.
RUN sudo apt-get update
RUN sudo apt-get install -y vim
RUN sudo apt-get install -y curl
RUN sudo apt-get install -y python3-pip
RUN pip3 install flask
# Install dependencies for Jenkins (Java).
# Install Java 1.8.
RUN sudo apt-get install -y python-software-properties debconf-utils
RUN sudo apt-get install -y software-properties-common
RUN sudo add-apt-repository -y ppa:webupd8team/java
RUN sudo apt-get update
RUN echo "oracle-java8-installer shared/accepted-oracle-license-v1-1 select true" | sudo debconf-set-selections
RUN sudo apt-get install -y oracle-java8-installer
# Install, start Jenkins.
RUN sudo apt-get install -y wget
RUN wget -q -O - https://pkg.jenkins.io/debian/jenkins-ci.org.key | apt-key add -
RUN echo deb http://pkg.jenkins-ci.org/debian binary/ > /etc/apt/sources.list.d/jenkins.list
RUN sudo apt-get update
RUN sudo apt-get install -y jenkins
RUN sudo /etc/init.d/jenkins start
COPY ./app /app
CMD ["python3","/app/main.py"]
I run this container with the following:
docker build -t jenkins_test .
docker run --name jenkins_test_container -tid -p 5000:5000 -p 8080:8080 jenkins_test:latest
I am able to install Jenkins and start Flask; however, Jenkins is not running. curl localhost:8080 is not successful.
In the log output, I am able to see:
Correct java version found
* Starting Jenkins Automation Server jenkins [ OK ]
However, it's still not running.
I can ssh into the container and manually run sudo /etc/init.d/jenkins start to start it, but I want it to start on docker run or docker build.
I have also tried putting sudo /etc/init.d/jenkins start in the CMD portion of the Docker file:
CMD python3 /app/main.py; sudo /etc/init.d/jenkins start
With this, I am able to curl Flask, but still not Jenkins.
How can I get Jenkins to start automatically?
There are a few points you need to be aware of:
No need to use sudo as the default user is root already.
In order to run multiple services in the same container you need some kind of process manager, such as Supervisord; see the sketch after this list. Jenkins is not running because the CMD is the main entry point for your container, so only Flask is started.
RUN is executed only during the build process, unlike CMD, which is executed each time you start a container from the image.
Combine RUN lines wherever possible in order to minimize the number of build layers, which leads to a smaller Docker image.
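For illustration, a minimal Supervisord sketch (the jenkins.war path is an assumption based on the Debian package layout; adjust to your image). A supervisord.conf with one section per service:
[program:jenkins]
command=/usr/bin/java -jar /usr/share/jenkins/jenkins.war
[program:flask]
command=python3 /app/main.py
And the corresponding Dockerfile additions:
RUN apt-get install -y supervisor
COPY supervisord.conf /etc/supervisor/conf.d/supervisord.conf
CMD ["/usr/bin/supervisord", "-n"]
The -n flag keeps Supervisord in the foreground as the container's main process, and it in turn starts and supervises both services.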
Regarding the usage of this:
CMD python3 /app/main.py; sudo /etc/init.d/jenkins start
It does not work because python3 /app/main.py runs in the foreground, so sudo /etc/init.d/jenkins start won't run until the previous command exits.
I was only able to get this to work by starting Jenkins in the CMD portion, but I needed to start Jenkins before Flask, since Flask runs continuously and the next command would never execute:
Did not work:
CMD python3 /app/main.py; sudo /etc/init.d/jenkins start
This did work:
CMD sudo /etc/init.d/jenkins start; python3 /app/main.py
EDIT:
I believe putting it in the RUN portion would not work because the container would build but not save any running services. I'm not sure if containers can be saved and loaded with running processes like that, but I might be wrong. Would appreciate clarification if so.
It seems like the kind of thing that should go in RUN, so if anyone knows why that didn't work, or knows the relevant best practices, I would also appreciate the info.

Running systemd in docker container causes host crash

I'm trying to create a systemd-based Docker container, but when I run the built container my system crashes. I think running init in the container might somehow be conflicting with systemd on my host.
When I try to run the docker container I am logged out of my account and briefly see what looks like my system going through a boot process. My host is running Arch Linux, with linux 4.20.7.
It is only when I attempt to "boot" the container by running systemd via /sbin/init that the problem occurs.
docker run -it \
--volume=/sys/fs/cgroup:/sys/fs/cgroup:rw \
--privileged 66304e3bc48
Dockerfile (adapted from solita/ubuntu-systemd):
FROM ubuntu:18.04
# Don't start any optional services.
RUN find /etc/systemd/system \
/lib/systemd/system \
-path '*.wants/*' \
-not -name '*journald*' \
-not -name '*systemd-tmpfiles*' \
-not -name '*systemd-user-sessions*' \
-exec rm \{} \;
RUN apt-get update && \
apt-get install --yes \
python sudo bash ca-certificates dbus systemd && \
apt-get clean && \
rm -rf /var/lib/apt/lists/*
RUN systemctl set-default multi-user.target
RUN systemctl mask dev-hugepages.mount sys-fs-fuse-connections.mount
STOPSIGNAL SIGRTMIN+3
# Workaround for docker/docker#27202, technique based on comments from docker/docker#9212
CMD ["/bin/bash", "-c", "exec /sbin/init --log-target=journal 3>&1"]
I would expect the container to just boot up running systemd, and I'm not sure what I might be doing wrong.
Docker does not want to include systemd by default, because Docker positions itself as an application container (meaning one application per container). There is another type of container, called a system container; the best known are OpenVZ, LXC/LXD, and systemd-nspawn. All of those run a full OS, with systemd, as if it were a virtual machine.
Using systemd inside Docker is also far riskier than running systemd inside LXD.
There is even a newcomer called Podman, which is a clone of Docker that works with systemd inside containers by default, for example when you use an image which already contains systemd, like the Ubuntu cloud images: http://uec-images.ubuntu.com/releases/server/bionic/release/
So my advice is to test LXD and systemd-nspawn, and to keep an eye on Podman; it solves what Docker does not want to solve. Read this to understand: https://lwn.net/Articles/676831/
references:
https://coreos.com/rkt/docs/latest/rkt-vs-other-projects.html
https://podman.io/slides/2018_10_01_Replacing_Docker_With_Podman.pdf
https://containerjournal.com/features/system-containers-vs-application-containers-difference-matter
Runtimes and the Curse of the Privileged Container: https://brauner.github.io/2019/02/12/privileged-containers.html
I ended up using the paulfantom/ubuntu-molecule Docker image.
Currently it looks like they're just installing systemd, setting some environment variables, and using the systemd binary directly as the entry point. It seems to work without the issues I mentioned in the original post.
Dockerfile
FROM ubuntu:18.04
ENV container docker
ENV LC_ALL C
ENV DEBIAN_FRONTEND noninteractive
RUN sed -i 's/# deb/deb/g' /etc/apt/sources.list
# hadolint ignore=DL3008
RUN apt-get update \
&& apt-get install -y --no-install-recommends systemd python sudo bash iproute2 net-tools \
&& apt-get clean \
&& rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/*
# hadolint ignore=SC2010,SC2086
RUN cd /lib/systemd/system/sysinit.target.wants/ \
&& ls | grep -v systemd-tmpfiles-setup | xargs rm -f $1
RUN rm -f /lib/systemd/system/multi-user.target.wants/* \
/etc/systemd/system/*.wants/* \
/lib/systemd/system/local-fs.target.wants/* \
/lib/systemd/system/sockets.target.wants/*udev* \
/lib/systemd/system/sockets.target.wants/*initctl* \
/lib/systemd/system/basic.target.wants/* \
/lib/systemd/system/anaconda.target.wants/* \
/lib/systemd/system/plymouth* \
/lib/systemd/system/systemd-update-utmp*
RUN systemctl set-default multi-user.target
ENV init /lib/systemd/systemd
VOLUME [ "/sys/fs/cgroup" ]
ENTRYPOINT ["/lib/systemd/systemd"]
To "match the host as close as possible" was the original goal of the docker-systemctl-replacement script. You can test drive scripts in a container that may be run later on a virtual machine. It allows to do some systemctl commands without an active systemd daemon.
It can also serve as an init daemon if you wish to. A systemd-enabled operating system will feel quite similar inside a container with it.
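A rough sketch of the pattern (assuming you have downloaded systemctl.py from the project; mariadb is just an example service, and you should check the project README for the exact usage):
FROM centos:7
RUN yum -y install mariadb-server && yum clean all
# Shadow the real systemctl with the replacement script
COPY systemctl.py /usr/bin/systemctl
# enable works at build time because no systemd daemon is required
RUN systemctl enable mariadb
# Run as PID 1; the script starts the enabled services
CMD ["/usr/bin/systemctl"]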

Starting Gunicorn Service in Dockerfile : Failed to get D-Bus connection: Operation not permitted

I'm trying to start services (gunicorn, nginx) from my Dockerfile, but I get this error.
This is my Dockerfile:
FROM centos:centos7
RUN yum -y install epel-release
RUN yum -y --enablerepo=base clean metadata
RUN yum -y install nginx
RUN yum -y install python-pip
RUN pip install --upgrade pip
RUN yum -y install systemd;
RUN yum clean all;
COPY . /
RUN pip install --no-cache-dir -r requirements.txt
RUN ./manage.py makemigrations
RUN ./manage.py migrate
ENV container docker
RUN (cd /lib/systemd/system/sysinit.target.wants/; for i in *; do [ $i == \
systemd-tmpfiles-setup.service ] || rm -f $i; done); \
rm -f /lib/systemd/system/multi-user.target.wants/*;\
rm -f /etc/systemd/system/*.wants/*;\
rm -f /lib/systemd/system/local-fs.target.wants/*; \
rm -f /lib/systemd/system/sockets.target.wants/*udev*; \
rm -f /lib/systemd/system/sockets.target.wants/*initctl*; \
rm -f /lib/systemd/system/basic.target.wants/*;\
rm -f /lib/systemd/system/anaconda.target.wants/*;
#Gunicorn
RUN cp gunicorn_systemd /etc/systemd/system/gunicorn.service
RUN systemctl start gunicorn
RUN systemctl enable gunicorn
And this is my build command
docker build -t guni ./
Any help, please?
You are trying to interact with systemd in your build script:
RUN systemctl start gunicorn
There are a number of problems here. First, trying to "start" a service as part of the build process doesn't make any sense: you're building an image, not starting a container.
Secondly, you're trying to interact with systemd, but you're never starting systemd, and it is unlikely that you want to [1]. Since a docker container is typically a "single process" environment, you don't need any init-like process supervisor to start things for you. You just need to arrange to run the necessary command yourself.
Taking Apache httpd as an example, rather than running:
systemctl start httpd
You would run:
httpd -DFOREGROUND
This runs the webserver and ensures that it stays in the foreground (the Docker container will exit when the foreground process exits). You can surely do something similar with gunicorn.
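For example, a minimal sketch (the myapp.wsgi module path is hypothetical; substitute your project's WSGI module):
CMD ["gunicorn", "--bind", "0.0.0.0:8000", "myapp.wsgi:application"]
gunicorn runs in the foreground by default, so this alone keeps the container alive without any init system.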
Your container is also missing a CMD or ENTRYPOINT directive, so it's not going to do anything when you run it unless you provide an explicit command, and that's probably not the behavior you want.
[1] If you really think you need systemd, you would need to arrange to start it when the container starts (e.g, CMD /sbin/init), but systemd is not something that runs well in an unprivileged container environment. It's possible but I wouldn't recommend it.

Beanstalkd in docker

I'm building a Docker image with this Dockerfile
FROM ubuntu:12.04
ENV DEBIAN_FRONTEND noninteractive
ENV PATH /usr/local/rvm/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
# update apt
RUN echo "deb http://archive.ubuntu.com/ubuntu precise main universe" > /etc/apt/sources.list
RUN apt-get update
RUN apt-get -y dist-upgrade
RUN apt-get install -y beanstalkd
RUN sed -i 's/\#START=yes/START=yes/g' /etc/default/beanstalkd
EXPOSE 11300
ENTRYPOINT service beanstalkd start
The image is successfully built and then I want to create an instance:
docker run -i -d -p 11300:11300 beanstalk /bin/bash
However, when I do docker ps -a, the instance has status Exit 0. I'm assuming that this means that the instance is not running. When I try to start it or attach to it, nothing seems to be happening.
So the question is why is the container not running?
Thanks, Michal
With service beanstalkd start you are starting the server as a background daemon, and then the shell exits, so the container exits too. You will want to run the program directly: ENTRYPOINT /usr/local/bin/beanstalkd -l 0.0.0.0 -p 11300 -b .... (etc.)
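A minimal exec-form sketch (the binlog directory is hypothetical, and on Ubuntu the apt package installs the binary to /usr/bin/beanstalkd rather than /usr/local/bin):
ENTRYPOINT ["/usr/bin/beanstalkd", "-l", "0.0.0.0", "-p", "11300", "-b", "/var/lib/beanstalkd"]
The exec form makes beanstalkd PID 1, running in the foreground, so the container stays up for as long as the server does.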
