How to start MariaDB and keep it running in a CentOS-based Docker image - docker

I'm trying to create a Dockerfile (the base OS must be CentOS) that will install MariaDB, start it, and keep it running, so that I can use the container in GitLab to run my (Java) integration tests. This is what I have so far:
FROM centos:7
ENV container docker
RUN (cd /lib/systemd/system/sysinit.target.wants/; for i in *; do [ $i == \
systemd-tmpfiles-setup.service ] || rm -f $i; done); \
rm -f /lib/systemd/system/multi-user.target.wants/*;\
rm -f /etc/systemd/system/*.wants/*;\
rm -f /lib/systemd/system/local-fs.target.wants/*; \
rm -f /lib/systemd/system/sockets.target.wants/*udev*; \
rm -f /lib/systemd/system/sockets.target.wants/*initctl*; \
rm -f /lib/systemd/system/basic.target.wants/*;\
rm -f /lib/systemd/system/anaconda.target.wants/*;
VOLUME [ "/sys/fs/cgroup" ]
CMD ["/usr/sbin/init"]
RUN rm /bin/sh && ln -s /bin/bash /bin/sh
# Install epel and java
RUN yum install -y java-1.8.0-openjdk java-1.8.0-openjdk-devel wget
ENV JAVA_HOME /usr/lib/jvm/java-1.8.0-openjdk/
EXPOSE 8080
EXPOSE 3306
# install mariadb
RUN yum -y install mariadb
RUN yum -y install mariadb-server
RUN systemctl start mariadb
ENTRYPOINT tail -f /dev/null
The error I'm getting is
Failed to get D-Bus connection: Operation not permitted

You can do something like this:
FROM centos/mariadb-102-centos7
USER root
# Install epel and java
RUN yum install -y java-1.8.0-openjdk java-1.8.0-openjdk-devel wget
ENV JAVA_HOME /usr/lib/jvm/java-1.8.0-openjdk/
You can mount your code folder into this container and execute it with docker exec.
It is recommended, however, that you use two different containers: one for the db and one for your code. You can then pass the code container the env vars required to connect to the db container.
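A minimal sketch of that two-container approach (the network name, test image name, and env var names here are placeholders; the env vars supported by the db image depend on which image you use):

```shell
# Put both containers on one user-defined network so the test
# container can reach the db by its container name ("db").
docker network create testnet
docker run -d --name db --network testnet centos/mariadb-102-centos7
docker run --rm --network testnet \
    -e DB_HOST=db -e DB_PORT=3306 my-java-tests
```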

Nothing is running by default in a container, including systemd, so you cannot use systemd to start MariaDB.
If we look at the official mariadb Dockerfile, we can see that MariaDB is started by adding CMD ["mysqld"] to the Dockerfile.
You must also make sure to install MariaDB in your container with RUN yum -y install mariadb-server mariadb, as it is not installed by default either (on CentOS the server package is mariadb-server and the client package is mariadb).
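Putting those pieces together, a rough sketch might look like the following (the mysqld path and the mysql_install_db initialization step follow the CentOS 7 mariadb-server package layout; verify them on your image):

```dockerfile
FROM centos:7
# Install server and client packages; no systemd needed.
RUN yum -y install mariadb-server mariadb && yum clean all
# Initialize the data directory once, at build time.
RUN mysql_install_db --user=mysql --datadir=/var/lib/mysql
EXPOSE 3306
USER mysql
# Run mysqld in the foreground as PID 1 so the container stays up.
CMD ["/usr/libexec/mysqld"]
```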

Related

Permissions in Docker volume

I am struggling with permissions on a Docker volume; I get access denied when writing.
This is a small part of my Dockerfile:
FROM ubuntu:18.04
RUN apt-get update && \
apt-get install -y \
apt-transport-https \
build-essential \
ca-certificates \
curl \
vim && \............
RUN curl -sL https://deb.nodesource.com/setup_8.x | bash - && apt-get install -y nodejs
# Add non-root user
ARG USER=user01
RUN useradd -Um -d /home/$USER -s /bin/bash $USER && \
apt install -y python3-pip && \
pip3 install qrcode[pil]
#Copy that startup.sh into the scripts folder
COPY /scripts/startup.sh /scripts/startup.sh
#Making the startup.sh executable
RUN chmod -v +x /scripts/startup.sh
#Copy node API files
COPY --chown=user1 /node_api/* /home/user1/
USER $USER
WORKDIR /home/$USER
# Expose needed ports
EXPOSE 3000
VOLUME /data_storage
ENTRYPOINT [ "/scripts/startup.sh" ]
Also a small part of my startup.sh
#!/bin/bash
/usr/share/lib/provision.py --enterprise-seed $ENTERPRISE_SEED > config.json
Then my docker builds command:
sudo docker build -t mycontainer .
And the docker run command:
sudo docker run -v data_storage:/home/user01/.client -p 3008:3000 -itd mycontainer
The problem I have is that the Python script will create the folder /home/user01/.client and copy some files in there. That always worked fine. But now I want those files, which are data files, in a volume for backup purposes. And with the volume mapping in place I get permission denied, so the Python script is not able to write anymore.
So at the end of my Dockerfile, this instruction combined with the mapping in the docker run command gives me the permission denied:
VOLUME /data_storage
Any suggestions on how to resolve this? Are more permissions needed for "user01"?
Thanks
I was able to resolve my issue by removing the VOLUME instruction from the Dockerfile and just doing the mapping when executing docker run:
sudo docker run -v data_storage:/home/user01/.client -p 3008:3000 -itd mycontainer
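An alternative that keeps the VOLUME instruction is to create the mount point with the right ownership before dropping privileges (a sketch based on the Dockerfile above; the path matches the -v mapping used in docker run):

```dockerfile
# Pre-create the data dir as root and hand it to the unprivileged
# user, so the named volume inherits writable ownership on first use.
RUN mkdir -p /home/$USER/.client && chown -R $USER:$USER /home/$USER/.client
USER $USER
```

On first use, a named volume is initialized from the image's pre-existing directory, including its ownership, so the non-root user can write to it.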

AEM 6.0 on docker - Dbus connection error

I'm trying to dockerize an AEM 6.0 installation, and this is the Dockerfile for my author instance.
from centos:latest
COPY aem6.0-author-p4502.jar /AEM/aem/author/aem6.0-author-p4502.jar
COPY license.properties /AEM/aem/author/license.properties
RUN yum install dnsmasq -y
RUN systemctl enable dnsmasq
RUN yum install initscripts -y
RUN (cd /lib/systemd/system/sysinit.target.wants/; for i in *; do [ $i == systemd-tmpfiles-setup.service ] || rm -f $i; done); \
rm -f /lib/systemd/system/multi-user.target.wants/*;\
rm -f /etc/systemd/system/*.wants/*;\
rm -f /lib/systemd/system/local-fs.target.wants/*; \
rm -f /lib/systemd/system/sockets.target.wants/*udev*;\
rm -f /lib/systemd/system/sockets.target.wants/*initctl*;\
rm -f /lib/systemd/system/basic.target.wants/*;\
rm -f /lib/systemd/system/anaconda.target.wants/*;
WORKDIR /AEM/aem/author
RUN yum install wget -y
RUN wget --no-cookies --no-check-certificate --header "Cookie: gpw_e24=http%3A%2F%2Fwww.oracle.com%2F; oraclelicense=accept-securebackup-cookie" "http://download.oracle.com/otn-pub/java/jdk/8u151-b12/e758a0de34e24606bca991d704f6dcbf/jdk-8u151-linux-x64.rpm"
RUN yum localinstall jdk-8u151-linux-x64.rpm -y
RUN java -XX:MaxPermSize=256m -Xmx512M -jar aem6.0-author-p4502.jar -unpack
COPY aem6 /etc/init.d/aem6
RUN chkconfig --add aem6
RUN yum -y install initscripts && yum update -y & yum clean all
RUN chown -R $USER:$(id -G) /etc/init.d
RUN chmod 777 -R /etc/init.d/aem6
RUN systemctl enable aem6.service
RUN service aem6 start
VOLUME /sys/fs/cgroup
CMD /usr/sbin/init
The build fails when starting the service, with the error "Failed to get D-Bus connection". I haven't been able to figure out how to fix it.
I've tried these
- https://github.com/CentOS/sig-cloud-instance-images/issues/45
- https://hub.docker.com/_/centos/
Here, the problem is that you're trying to start the aem service during the "build" phase, with this statement:
RUN service aem6 start
This is problematic for a number of reasons. First, you're building an image. Starting a service at this stage is pointless: when the build process completes, nothing is running. An image is just a collection of files. You don't have any processes until you boot a container, at which point your CMD and ENTRYPOINT influence what is running.
Another problem is that at this stage, nothing else is running inside the container environment. The service command in this case is trying to communicate with systemd over the D-Bus API, but neither of those services is running.
There is a third slightly more subtle problem: the solution you've chosen relies on systemd, the standard CentOS process manager, and as far as things go you've configured things correctly (by both enabling the service with systemctl enable ... and by starting /sbin/init in your CMD statement). However, running systemd in a container can be tricky, although it is possible. In the past, systemd required the container to run with the --privileged flag; I'm not sure if this is necessary any more.
If you weren't running multiple processes (dnsmasq and aem) inside the container, the simplest solution would be to start the aem service directly, rather than relying on a process manager. This would reduce your Dockerfile to something like:
FROM centos:latest
COPY aem6.0-author-p4502.jar /AEM/aem/author/aem6.0-author-p4502.jar
COPY license.properties /AEM/aem/author/license.properties
WORKDIR /AEM/aem/author
RUN yum install wget -y
RUN wget --no-cookies --no-check-certificate --header "Cookie: gpw_e24=http%3A%2F%2Fwww.oracle.com%2F; oraclelicense=accept-securebackup-cookie" "http://download.oracle.com/otn-pub/java/jdk/8u151-b12/e758a0de34e24606bca991d704f6dcbf/jdk-8u151-linux-x64.rpm"
RUN yum localinstall jdk-8u151-linux-x64.rpm -y
RUN java -XX:MaxPermSize=256m -Xmx512M -jar aem6.0-author-p4502.jar -unpack
CMD some commandline to start aem
If you actually require dnsmasq, you could run it in a second container (potentially sharing the same network environment as the aem container).
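Sketching that out with placeholder image and container names (this requires a running Docker daemon and real images; my-aem and my-dnsmasq-image are assumptions):

```shell
# Build the trimmed-down aem image and start it, then run dnsmasq in
# a second container sharing the aem container's network namespace.
docker build -t my-aem .
docker run -d --name aem my-aem
docker run -d --name dnsmasq --network container:aem my-dnsmasq-image
```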

Starting Gunicorn Service in Dockerfile : Failed to get D-Bus connection: Operation not permitted

I'm trying to start services (gunicorn, nginx) from my Dockerfile, but I get this error.
This is my Dockerfile:
FROM centos:centos7
RUN yum -y install epel-release
RUN yum -y --enablerepo=base clean metadata
RUN yum -y install nginx
RUN yum -y install python-pip
RUN pip install --upgrade pip
RUN yum -y install systemd;
RUN yum clean all;
COPY . /
RUN pip install --no-cache-dir -r requirements.txt
RUN ./manage.py makemigrations
RUN ./manage.py migrate
ENV container docker
RUN (cd /lib/systemd/system/sysinit.target.wants/; for i in *; do [ $i == \
systemd-tmpfiles-setup.service ] || rm -f $i; done); \
rm -f /lib/systemd/system/multi-user.target.wants/*;\
rm -f /etc/systemd/system/*.wants/*;\
rm -f /lib/systemd/system/local-fs.target.wants/*; \
rm -f /lib/systemd/system/sockets.target.wants/*udev*; \
rm -f /lib/systemd/system/sockets.target.wants/*initctl*; \
rm -f /lib/systemd/system/basic.target.wants/*;\
rm -f /lib/systemd/system/anaconda.target.wants/*;
#Gunicorn
RUN cp gunicorn_systemd /etc/systemd/system/gunicorn.service
RUN systemctl start gunicorn
RUN systemctl enable gunicorn
And this is my build command
docker build -t guni ./
Any help please ?
You are trying to interact with systemd in your build script:
RUN systemctl start gunicorn
There are a number of problems here. First, trying to "start" a service as part of the build process doesn't make any sense: you're building an image, not starting a container.
Secondly, you're trying to interact with systemd, but you're never starting systemd, and it is unlikely that you want to [1]. Since a docker container is typically a "single process" environment, you don't need any init-like process supervisor to start things for you. You just need to arrange to run the necessary command yourself.
Taking Apache httpd as an example, rather than running:
systemctl start httpd
You would run:
httpd -DFOREGROUND
This runs the webserver and ensures that it stays in the foreground (the Docker container will exit when the foreground process exits). You can surely do something similar with gunicorn.
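For gunicorn, a hedged sketch of that CMD might look like the following (the module path myproject.wsgi:application and the port are assumptions; substitute your own project's WSGI entry point):

```dockerfile
# Run gunicorn in the foreground as the container's main process.
CMD ["gunicorn", "--bind", "0.0.0.0:8000", "myproject.wsgi:application"]
```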
Your container is also missing a CMD or ENTRYPOINT directive, so it's not going to do anything when you run it unless you provide an explicit command, and that's probably not the behavior you want.
[1] If you really think you need systemd, you would need to arrange to start it when the container starts (e.g, CMD /sbin/init), but systemd is not something that runs well in an unprivileged container environment. It's possible but I wouldn't recommend it.

Issue getting memcache container to automatically start in Docker

I have an image being built that only really contains memcached, and I want memcached to start when the container runs.
This is my current Dockerfile -
FROM centos:7
MAINTAINER Some guy <someguy@guysome.org>
RUN yum update -y
RUN yum install -y git https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm
RUN yum install -y ansible && yum clean all -y
RUN yum install -y memcached
RUN (cd /lib/systemd/system/sysinit.target.wants/; for i in *; do [ $i == systemd-tmpfiles-setup.service ] || rm -f $i; done); \
rm -f /lib/systemd/system/multi-user.target.wants/*;\
rm -f /etc/systemd/system/*.wants/*;\
rm -f /lib/systemd/system/local-fs.target.wants/*; \
rm -f /lib/systemd/system/sockets.target.wants/*udev*; \
rm -f /lib/systemd/system/sockets.target.wants/*initctl*; \
rm -f /lib/systemd/system/basic.target.wants/*;\
rm -f /lib/systemd/system/anaconda.target.wants/*;
VOLUME [ "/sys/fs/cgroup" ]
EXPOSE 11211/tcp 11211/udp
CMD ["/usr/bin/memcached"]
#CMD ["/usr/bin/memcached -u root"]
#CMD ["/usr/bin/memcached", "-D", "FOREGROUND"]
The image builds successfully, but when I try to run the container using the command
docker run -d -i -t -P <image id>, the container does not appear in the list returned by docker ps.
I attempted to have my memcached service run the same way as my httpd container, but I cannot pass in the -D flag (since it's already a daemon, I'm guessing). This is how my httpd CMD was set up -
CMD ["/usr/sbin/httpd", "-D", "FOREGROUND"]
Locally, if I run the command /usr/bin/memcached -u root it runs as a process, but when I try it in the container's CMD it tells me that it cannot find the specified file (having to do with the -u root section, I'm guessing).
Setting the CMD to /bin/bash still did not allow the service to start either.
How can I have my memcached service run and show up when I run docker ps, so that I can open a bash session inside the container?
Thanks.
memcached will run in the foreground by default, which is what you want. The -d option would run memcached as a daemon which would cause the container to exit immediately.
The Dockerfile looks overly complex, try this
FROM centos:7
RUN yum update -y && yum install -y epel-release && yum install -y memcached && yum clean all
EXPOSE 11211
CMD ["/usr/bin/memcached","-p","11211","-u","memcached","-m","64"]
Then you can do what you need
$ docker build -t me/memcached .
<snipped build>
$ CID=$(docker create me/memcached)
$ docker start $CID
4ac5afed0641f07f4694c30476cef41104f6fd864c174958b971822005fd292a
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
4ac5afed0641 me/memcached "/usr/bin/memcached -" About a minute ago Up 4 seconds 11211/tcp jovial_bardeen
$ docker exec $CID ps -ef
UID PID PPID C STIME TTY TIME CMD
memcach+ 1 0 0 01:03 ? 00:00:00 /usr/bin/memcached -p 11211 -u memcached -m 64
root 10 0 2 01:04 ? 00:00:00 ps -ef
$ docker exec -ti $CID bash
[root@4ac5afed0641 /]#
Or skip your Dockerfile if it actually only runs memcached and use:
docker run --name my-memcache -d memcached
At least to get your basic set-up going, and then you can update that official image as needed.

Docker CMD doesn't see installed components

I am trying to build a Docker image using the following Dockerfile.
FROM ubuntu:latest
# Replace shell with bash so we can source files
RUN rm /bin/sh && ln -s /bin/bash /bin/sh
# Update packages
RUN apt-get -y update && apt-get install -y \
curl \
build-essential \
libssl-dev \
git \
&& rm -rf /var/lib/apt/lists/*
ENV APP_NAME testapp
ENV NODE_VERSION 5.10
ENV SERVE_PORT 8080
ENV LIVE_RELOAD_PORT 8888
# Install nvm, node, and angular
RUN (curl -o- https://raw.githubusercontent.com/creationix/nvm/v0.31.1/install.sh | bash -) \
&& source /root/.nvm/nvm.sh \
&& nvm install $NODE_VERSION \
&& npm install -g angular-cli \
&& ng new $APP_NAME \
&& cd $APP_NAME \
&& npm run postinstall
EXPOSE $SERVE_PORT $LIVE_RELOAD_PORT
WORKDIR $APP_NAME
EXPOSE 8080
CMD ["node", "-v"]
But I keep getting an error when trying to run it:
docker: Error response from daemon: Container command 'node' not found or does not exist..
I know node is being properly installed, because if I rebuild the image with the CMD line commented out of the Dockerfile
#CMD ["node", "-v"]
And then start a shell session
docker run -it testimage
I can see that all my dependencies are there and return proper results
node -v
v5.10.1
.....
ng -v
angular-cli: 1.0.0-beta.5
node: 5.10.1
os: linux x64
So my question is. Why is the CMD in Dockerfile not able to run these and how can I fix it?
When using the shell to RUN node via nvm, you have sourced the nvm.sh file, so the shell has a $PATH set in its environment that lets it find the executables nvm installed.
When you run commands via docker run, only a default PATH is injected:
docker run <your-ubuntu-image> echo $PATH
docker run <your-ubuntu-image> which node
docker run <your-ubuntu-image> nvm which node
Specifying a CMD in array (exec) form runs the binary directly, without a shell and without a $PATH lookup.
Provide the full path to your node binary. With nvm, node is installed under the nvm directory rather than /bin, for example:
CMD ["/root/.nvm/versions/node/v5.10.1/bin/node", "-v"]
It's better to use the node binary directly rather than the nvm helper scripts, due to the way Docker's signal processing works. It might also be easier to install Node from apt packages in Docker rather than using nvm.
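If you do want the nvm-managed PATH at runtime, a shell-form CMD that sources nvm.sh first is one workaround (a sketch; /root/.nvm is the install script's default location, and note the signal-handling caveat above, since node will not be PID 1 here):

```dockerfile
# bash sources nvm.sh to set up $PATH before node is looked up.
CMD ["/bin/bash", "-c", "source /root/.nvm/nvm.sh && node -v"]
```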
