Duplicate Rabbit Conf - docker

I'm running RabbitMQ v3.8.18 inside Docker running on EKS v1.20.
The rabbitmq.conf file is generated by a helpers.tpl function; this is then included in the Helm deploy process and can be seen on disk.
However, about a minute later a second config file is created. It contains some additional and duplicate config, and is named "rabbitmq.conf.conf".
Any pointers or tips on why this is happening would be much appreciated :~)
Dockerfile
ARG SOURCE_VERSION=3.8.18
FROM rabbitmq:${SOURCE_VERSION}
USER root
RUN apt-get update && apt-get install -y lsof htop net-tools strace curl dnsutils procps jq vim netcat
RUN chown -R rabbitmq:rabbitmq /usr/local/lib/erlang /opt/rabbitmq

Related

Dockerfile Preload Files & Volumes

I'm not sure that I am using the correct terminology here but I thought I would at least try to explain my issue:
Goal:
I want to create a Docker image that people can use with all the files preloaded and ready to go, because they will basically just have to edit a config.json file. I want people to be able to map a bind mount in their docker-compose file (or just the CLI) to something like "/my/host/path/config:/config", and when they spin up the image, all the files will be available at that location on their host machine in persistent storage.
Issue:
When you spin up the image for the first time, the directories get created on the host, but there are no files for the end user to modify. So they are left manually copying files into this folder to make it work, and that is not acceptable in my humble opinion.
Quick overview of the image:
Python script that uses Selenium to perform some actions on a website
Dockerfile:
FROM python:3.9.2
RUN apt-get update
RUN apt-get install -y cron wget apt-transport-https
# install google chrome
RUN wget -q -O - https://dl-ssl.google.com/linux/linux_signing_key.pub | apt-key add -
RUN sh -c 'echo "deb [arch=amd64] http://dl.google.com/linux/chrome/deb/ stable main" >> /etc/apt/sources.list.d/google-chrome.list'
RUN apt-get -y update
RUN apt-get install -y google-chrome-stable
# install chromedriver
RUN apt-get install -yqq unzip
RUN wget -O /tmp/chromedriver.zip http://chromedriver.storage.googleapis.com/`curl -sS chromedriver.storage.googleapis.com/LATEST_RELEASE`/chromedriver_linux64.zip
RUN unzip /tmp/chromedriver.zip chromedriver -d /usr/local/bin/
# Set TZ
RUN apt-get install -y tzdata
ENV TZ=America/New_York
RUN mkdir -p /config
COPY . /config
WORKDIR /config
RUN pip install --no-cache-dir -r requirements.txt
# set display port to avoid crash
ENV DISPLAY=:99
# Custom Env Vars
ENV DOCKER_IMAGE=true
# Setup Volumes
VOLUME [ "/config" ]
# Run the command on container startup
CMD ["python3", "-u", "/config/RunTests.py"]
Any help would be greatly appreciated.
This is not how Docker bind mounts work. Bind mounts use the mount system call under the hood, and mounting a host folder over a path hides whatever content the image has at that path inside your container.
Running docker run -v /my/config:/config container will therefore always hide (override) the content inside your container.
By contrast, if you use an empty Docker volume (created with the docker volume command), Docker will copy the files from the image into the volume before mounting it.
So docker run -v config_volume:/config container will copy your config files into the volume the first time. Then you can use the volume with --volumes-from or mount it in another container.
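A minimal sketch of that copy-on-first-use behavior (the image name is a placeholder):
# create an empty named volume, then mount it; Docker seeds it from the image's /config on first use
docker volume create config_volume
docker run -v config_volume:/config your-image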
To learn more about this take a look at this issue.
Another workaround is to bind your volume to a host folder while creating it. More info here.
docker volume create --driver local \
--opt type=none \
--opt device=$configVolumePath \
--opt o=bind \
config_vol
For me the best solution is to copy or symlink your configuration files on container startup.
You can do so by adding a cp or ln command to your entrypoint script.
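A minimal sketch of such an entrypoint, assuming the image keeps its bundled defaults in a hypothetical /defaults directory:
#!/bin/sh
# entrypoint.sh: seed /config with the bundled defaults on first run, then start the app
if [ -z "$(ls -A /config 2>/dev/null)" ]; then
    cp -r /defaults/. /config/
fi
exec python3 -u /config/RunTests.py
In the Dockerfile you would then COPY the defaults to /defaults, COPY this script in, and set it as the ENTRYPOINT.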

Installing Kubernetes in Docker container

I want to try out Kubeflow and see if it fits my projects. I want to deploy it locally as a development server, but I have Windows on my computer and Kubeflow only works on Linux. I'm not allowed to dual boot this computer; I could install a virtual machine, but I thought it would be easier to use Docker, and oh boy was I wrong. So, the problem is, I want to install Kubernetes in a Docker container. Right now this is the Dockerfile I've written:
# Docker file with local deployment of Kubeflow
FROM ubuntu:18.04
ENV USER=Joao
ENV PASSWORD=Password
ENV WK_DIR=/home/${USER}
# Setup Ubuntu
RUN apt-get update -y
RUN apt-get install -y conntrack sudo wget
RUN useradd -rm -d /home/${USER} -s /bin/bash -g root -G sudo -u 1001 -p ${PASSWORD} ${USER}
WORKDIR ${WK_DIR}
# Installing Docker CE
RUN apt-get install -y apt-transport-https ca-certificates curl software-properties-common
RUN curl -fsSL https://download.docker.com/linux/ubuntu/gpg | apt-key add -
RUN add-apt-repository \
"deb [arch=amd64] https://download.docker.com/linux/ubuntu \
$(lsb_release -cs) \
stable"
RUN apt-get update -y
RUN apt-get install -y docker-ce docker-ce-cli containerd.io
# Installing Kubectl
RUN curl -LO https://storage.googleapis.com/kubernetes-release/release/v1.15.0/bin/linux/amd64/kubectl
RUN chmod +x ./kubectl
RUN mv ./kubectl /usr/local/bin/kubectl
# Installing Minikube
RUN curl -LO https://storage.googleapis.com/minikube/releases/latest/minikube-linux-amd64
RUN install minikube-linux-amd64 /usr/local/bin/minikube
ENV PATH="${PATH}:${WK_DIR}"
COPY start.sh start.sh
CMD sh start.sh
With this, just to make the deployment easier, I also have a docker-compose.yaml that looks like this:
services:
  kf-local:
    build: .
    volumes:
      - path/to/folder:/usr/kubeflow
    privileged: true
And start.sh looks like this:
service docker start
minikube start \
--extra-config=apiserver.service-account-issuer=api \
--extra-config=apiserver.service-account-signing-key-file=/var/lib/minikube/certs/apiserver.key \
--extra-config=apiserver.service-account-api-audiences=api \
--driver=docker
The problem is, whenever I try running this I get the error:
X Exiting due to DRV_AS_ROOT: The "docker" driver should not be used with root privileges.
I've also tried creating a user and running it from there, but then I'm not able to run sudo. Any idea how I could install Kubernetes in a Docker container?
As you thought, you are right that using a VM would be an easy way to test this out.
Instead of setting up Kubernetes inside Docker, you can use a Linux system container for development and testing.
There are Linux containers available, known as LXC containers. Docker is a kind of application container, while LXC, in simple terms, is more like a VM for local development and testing: you can install things directly into it, rather than setting the application up inside an image as you do with Docker.
Read more about LXC: https://medium.com/#harsh.manvar111/lxc-vs-docker-lxc-101-bd49db95933a
You can also run it on Windows; try it out at https://linuxcontainers.org/
If you have read the Kubeflow documentation, there is also the Multipass option:
Multipass creates a Linux virtual machine on Windows, Mac or Linux
systems. The VM contains a complete Ubuntu operating system which can
then be used to deploy Kubernetes and Kubeflow.
Learn more about Multipass : https://multipass.run/#install
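For example, you could launch and enter an Ubuntu VM along these lines (a sketch only: the exact flags and sizes depend on your Multipass version and hardware):
multipass launch --name kubeflow --cpus 4 --mem 8G --disk 40G
multipass shell kubeflow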
Insufficient user permissions on the docker group and the minikube directory cause this error ("X Exiting due to DRV_AS_ROOT: The "docker" driver should not be used with root privileges.").
You can fix the error by adding your user to the docker group and setting permissions on the minikube profile directory (replace $USER with your username in the two commands below):
sudo usermod -aG docker $USER && newgrp docker
sudo chown -R $USER $HOME/.minikube; chmod -R u+wrx $HOME/.minikube
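Applied to the start.sh from the question, that could look roughly like this; a sketch only, reusing the ${USER} defined in the Dockerfile. The Docker daemon still has to be started as root, while minikube runs as the unprivileged user in the docker group:
service docker start
usermod -aG docker ${USER}
su - ${USER} -c "minikube start --driver=docker"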

Issue in creating Docker image from Dockerfile

Created a Dockerfile in order to install a Tomcat server, with Ubuntu as the base OS.
My Dockerfile:
FROM ubuntu
RUN apt-get update && apt-get upgrade -y #to update os
RUN apt-get dist-upgrade
RUN apt-get install build-essential
RUN apt-get install openjdk-8-jdk # to install java 8
RUN apt-get wget -y #to install wget package
RUN apt-get wget https://mirrors.estointernet.in/apache/tomcat/tomcat-9/v9.0.37/bin/apache-tomcat-9.0.37.tar.gz #to download tomcat
RUN tar -xvzf apache-tomcat-9.0.37 # unzipping the tomcat
RUN mkdir tomcat # craeting tomacat directory
RUN cp apache-tomcat-9.0.37/* tomcat # copying tomact files to tomact directory
Command to create the Docker image from the Dockerfile:
docker build -t [img name] -f [file name] .
On execution, while installing the Java package, I am getting this:
'''After this operation, 242 MB of additional disk space will be used.
Do you want to continue? [Y/n] Y'''
You are getting the prompt because the command is awaiting user input on whether or not to install a package. The -y flag you're using for a few of the commands (like the wget install) allows apt-get to assume a yes. Add this flag to all your installation commands.
By the way, there are quite a few potential issues with the Dockerfile you posted.
For example, you have RUN apt-get wget ...
Are you sure that is what you want to do, and not just RUN wget ...? Since wget is not a command that apt-get accepts, this will cause unexpected behavior.
You also seem to be missing the command to start the Tomcat server, which can make it so that nothing happens when you attempt to run the image.
I think you should add DEBIAN_FRONTEND=noninteractive when running the apt-get commands, something like this:
RUN apt-get update && \
DEBIAN_FRONTEND=noninteractive apt-get install build-essential -y
Also, it's considered bad practice to use multiple RUN steps which could be consolidated into one. More about Dockerfile best practices can be found here.
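Putting these suggestions together, a corrected Dockerfile might look roughly like this (a sketch, not a drop-in replacement; the Tomcat mirror URL is the one from the question and may no longer be available):
FROM ubuntu
# consolidate the apt-get steps; -y and DEBIAN_FRONTEND=noninteractive avoid prompts
RUN apt-get update && \
    DEBIAN_FRONTEND=noninteractive apt-get install -y build-essential openjdk-8-jdk wget
# wget is a standalone command, not an apt-get subcommand
RUN wget https://mirrors.estointernet.in/apache/tomcat/tomcat-9/v9.0.37/bin/apache-tomcat-9.0.37.tar.gz && \
    mkdir -p /opt/tomcat && \
    tar -xvzf apache-tomcat-9.0.37.tar.gz -C /opt/tomcat --strip-components=1
# start Tomcat in the foreground so the container keeps running
CMD ["/opt/tomcat/bin/catalina.sh", "run"]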

Jenkins not starting in docker (Dockerfile included)

I am attempting to build a simple app with Jenkins in a docker container. I have the following Dockerfile:
FROM ubuntu:trusty
# Install dependencies for Flask app.
RUN sudo apt-get update
RUN sudo apt-get install -y vim
RUN sudo apt-get install -y curl
RUN sudo apt-get install -y python3-pip
RUN pip3 install flask
# Install dependencies for Jenkins (Java).
# Install Java 1.8.
RUN sudo apt-get install -y python-software-properties debconf-utils
RUN sudo apt-get install -y software-properties-common
RUN sudo add-apt-repository -y ppa:webupd8team/java
RUN sudo apt-get update
RUN echo "oracle-java8-installer shared/accepted-oracle-license-v1-1 select true" | sudo debconf-set-selections
RUN sudo apt-get install -y oracle-java8-installer
# Install, start Jenkins.
RUN sudo apt-get install -y wget
RUN wget -q -O - https://pkg.jenkins.io/debian/jenkins-ci.org.key | apt-key add -
RUN echo deb http://pkg.jenkins-ci.org/debian binary/ > /etc/apt/sources.list.d/jenkins.list
RUN sudo apt-get update
RUN sudo apt-get install -y jenkins
RUN sudo /etc/init.d/jenkins start
COPY ./app /app
CMD ["python3","/app/main.py"]
I run this container with the following:
docker build -t jenkins_test .
docker run --name jenkins_test_container -tid -p 5000:5000 -p 8080:8080 jenkins_test:latest
I am able to start Flask and install Jenkins; however, when the container runs, Jenkins is not running. curl localhost:8080 is not successful.
In the log output, I am able to see:
Correct java version found
* Starting Jenkins Automation Server jenkins [ OK ]
However, it's still not running.
I can ssh into the container and manually run sudo /etc/init.d/jenkins start to start it, but I want it to start on docker run or docker build.
I have also tried putting sudo /etc/init.d/jenkins start in the CMD portion of the Docker file:
CMD python3 /app/main.py; sudo /etc/init.d/jenkins start
With this, I am able to curl Flask, but still not Jenkins.
How can I get Jenkins to start automatically?
There are some points that you need to be aware of:
No need to use sudo, as the default user is root already.
In order to run multiple services in the same container you need to use some kind of process manager like Supervisord (see the sketch at the end of this answer). Jenkins is not running because the CMD is the main entry point for your container, so only Flask is running. Check the following link to learn how to start multiple services in Docker.
RUN is executed only during the build process, unlike CMD, which is executed each time you start a container from that image.
Combine the RUN lines wherever possible in order to minimize the number of build layers, which leads to a smaller Docker image.
Regarding the usage of this:
CMD python3 /app/main.py; sudo /etc/init.d/jenkins start
It does not work for you because python3 /app/main.py does not run as a background process, so sudo /etc/init.d/jenkins start won't run until the previous command has finished.
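A minimal Supervisord sketch for this case (the config and WAR paths are assumptions, not taken from the question, and may differ between Jenkins package versions):
# supervisord.conf
[supervisord]
nodaemon=true

[program:jenkins]
; run the Jenkins WAR directly instead of the forking init script
command=java -jar /usr/share/jenkins/jenkins.war --httpPort=8080
user=jenkins

[program:flask]
command=python3 -u /app/main.py
Then, in the Dockerfile, install supervisor, COPY this file to /etc/supervisor/conf.d/supervisord.conf, and set CMD ["supervisord", "-n"].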
I was only able to get this to work by starting Jenkins in the CMD portion, but needed to start Jenkins before Flask since Flask would continuously run and the next command would never execute:
Did not work:
CMD python3 /app/main.py; sudo /etc/init.d/jenkins start
This did work:
CMD sudo /etc/init.d/jenkins start; python3 /app/main.py
EDIT:
I believe putting it in the RUN portion would not work because the container would build but would not save any running services. I'm not sure if containers can be saved and loaded with running processes like that, but I might be wrong. Would appreciate clarification if so.
It seems like the sort of thing that should be in RUN, so if anyone knows why that didn't work, or knows some best practices, I would also appreciate the info.

AEM 6.0 on docker - Dbus connection error

I'm trying to dockerize an AEM 6.0 installation, and this is the Dockerfile for my author.
from centos:latest
COPY aem6.0-author-p4502.jar /AEM/aem/author/aem6.0-author-p4502.jar
COPY license.properties /AEM/aem/author/license.properties
RUN yum install dnsmasq -y
RUN systemctl enable dnsmasq
RUN yum install initscripts -y
RUN (cd /lib/systemd/system/sysinit.target.wants/; for i in *; do [ $i == systemd-tmpfiles-setup.service ] || rm -f $i; done); \
rm -f /lib/systemd/system/multi-user.target.wants/*;\
rm -f /etc/systemd/system/*.wants/*;\
rm -f /lib/systemd/system/local-fs.target.wants/*; \
rm -f /lib/systemd/system/sockets.target.wants/*udev*;\
rm -f /lib/systemd/system/sockets.target.wants/*initctl*;\
rm -f /lib/systemd/system/basic.target.wants/*;\
rm -f /lib/systemd/system/anaconda.target.wants/*;
WORKDIR /AEM/aem/author
RUN yum install wget -y
RUN wget --no-cookies --no-check-certificate --header "Cookie: gpw_e24=http%3A%2F%2Fwww.oracle.com%2F; oraclelicense=accept-securebackup-cookie" "http://download.oracle.com/otn-pub/java/jdk/8u151-b12/e758a0de34e24606bca991d704f6dcbf/jdk-8u151-linux-x64.rpm"
RUN yum localinstall jdk-8u151-linux-x64.rpm -y
RUN java -XX:MaxPermSize=256m -Xmx512M -jar aem6.0-author-p4502.jar -unpack
COPY aem6 /etc/init.d/aem6
RUN chkconfig --add aem6
RUN yum -y install initscripts && yum update -y & yum clean all
RUN chown -R $USER:$(id -G) /etc/init.d
RUN chmod 777 -R /etc/init.d/aem6
RUN systemctl enable aem6.service
RUN service aem6 start
VOLUME /sys/fs/cgroup
CMD /usr/sbin/init
The build fails on starting the service, with the error "failed to get Dbus connection". I haven't been able to figure out how to fix it.
I've tried these
- https://github.com/CentOS/sig-cloud-instance-images/issues/45
- https://hub.docker.com/_/centos/
Here, the problem is that you're trying to start the aem service during the "build" phase, with this statement:
RUN service aem6 start
This is problematic for a number of reasons. First, you're building an image. Starting a service at this stage is pointless...when the build process completes, nothing is running. An image is just a collection of files. You don't have any processes until you boot a container, at which point your CMD and ENTRYPOINT influence what is running.
Another problem is that at this stage, nothing else is running inside the container environment. The service command in this case is trying to communicate with systemd over the dbus API, but neither of those services is running.
There is a third slightly more subtle problem: the solution you've chosen relies on systemd, the standard CentOS process manager, and as far as things go you've configured things correctly (by both enabling the service with systemctl enable ... and by starting /sbin/init in your CMD statement). However, running systemd in a container can be tricky, although it is possible. In the past, systemd required the container to run with the --privileged flag; I'm not sure if this is necessary any more.
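If you do go the systemd route, the CentOS image documentation (linked in the question) historically suggested running the container along these lines; treat the image name as a placeholder and the exact flags as version-dependent:
docker run -d --privileged -v /sys/fs/cgroup:/sys/fs/cgroup:ro -p 4502:4502 my-aem-author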
If you weren't running multiple processes (dnsmasq and aem) inside the container, the simplest solution would be to start the aem service directly, rather than relying on a process manager. This would reduce your Dockerfile to something like:
FROM centos:latest
COPY aem6.0-author-p4502.jar /AEM/aem/author/aem6.0-author-p4502.jar
COPY license.properties /AEM/aem/author/license.properties
WORKDIR /AEM/aem/author
RUN yum install wget -y
RUN wget --no-cookies --no-check-certificate --header "Cookie: gpw_e24=http%3A%2F%2Fwww.oracle.com%2F; oraclelicense=accept-securebackup-cookie" "http://download.oracle.com/otn-pub/java/jdk/8u151-b12/e758a0de34e24606bca991d704f6dcbf/jdk-8u151-linux-x64.rpm"
RUN yum localinstall jdk-8u151-linux-x64.rpm -y
RUN java -XX:MaxPermSize=256m -Xmx512M -jar aem6.0-author-p4502.jar -unpack
CMD some commandline to start aem
If you actually require dnsmasq, you could run it in a second container (potentially sharing the same network environment as the aem container).
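A rough sketch of that approach, with placeholder image and container names (the second container joins the first one's network namespace):
docker run -d --name aem-author my-aem-author
docker run -d --name dnsmasq --network container:aem-author some-dnsmasq-image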
