CentOS7: How to start the slapd service in a docker container?

I want to run an OpenLDAP server in a docker container using CentOS7.
I managed to get a container running with openldap installed in it. My problem is that I am using an entrypoint.sh script to start the slapd service and add a user to my directory. I would like these two steps to be in the Dockerfile, so that the password used by ldapadd is not stored in the script.
So far I have only found examples for Debian.
https://github.com/kanboard/docker-openldap/blob/master/memberUid/Dockerfile is what I would like to do, but using CentOS 7.
I tried starting the slapd service in my Dockerfile, without success.
My Dockerfile looks like this:
FROM centos:7
RUN yum -y update && yum -y install \
openldap-servers \
openldap-clients \
libselinux-python \
openssl \
; yum clean all
RUN chown ldap:ldap -R /var/lib/ldap
COPY slapd.conf /etc/openldap/slapd.conf
COPY base.ldif /etc/openldap/schema/base.ldif
COPY entrypoint.sh /entrypoint.sh
RUN chmod 500 /entrypoint.sh
ENTRYPOINT ["/entrypoint.sh"]
My entrypoint.sh script looks like this:
#!/bin/bash
exec /usr/sbin/slapd -f /etc/openldap/slapd.conf -h "ldapi:/// ldap:///" -d stats &
sleep 10
ldapadd -x -w mypassword -D "cn=ldapadm,dc=mydomain" -f /etc/openldap/schema/base.ldif
This does work, however I want to start the LDAP service and run the ldapadd command in the Dockerfile, so that mypassword is not stored in entrypoint.sh.
Hence I tried these commands:
RUN systemctl start slapd
RUN ldapadd -x -w password -D "cn=ldapadm,dc=mydomain" -f /etc/openldap/schema/base.ldif
Of course this does not work, as systemctl does not work in a Dockerfile. What is the best alternative? I was considering having one container start the LDAP service, but then I do not know how to call it while building the image of the other container...
EDIT:
Thanks to Guido U. Draheim, I have an alternative to systemctl for starting the slapd service.
My Dockerfile now looks like this:
FROM centos:7
RUN yum -y update && yum -y install \
openldap-servers \
openldap-clients \
libselinux-python \
openssl \
; yum clean all
RUN chown ldap:ldap -R /var/lib/ldap
COPY slapd.conf /etc/openldap/slapd.conf
COPY base.ldif /etc/openldap/schema/base.ldif
COPY files/docker/systemctl.py /usr/bin/systemctl
RUN systemctl enable slapd
RUN systemctl start slapd;\
ldapadd -x -w password -D "cn=ldapadm,dc=sblanche" -f /etc/openldap/schema/base.ldif
COPY entrypoint.sh /entrypoint.sh
RUN chmod 500 /entrypoint.sh
ENTRYPOINT ["/entrypoint.sh"]
But I have got the following error : ldap_bind: Invalid credentials (49)

(a) You could use the docker-systemctl-replacement to run your "systemctl.py start slapd". That is the obvious first error.
(b) Each RUN in a Dockerfile is a new container, so a process started in an earlier RUN cannot survive into the next one anyway. That's why the referenced Dockerfile example combines the service start and the ldapadd in a single RUN with "&&" (see the sketch below).
And yeah, (c) I am using an openldap centos container. So go ahead, try again.
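A minimal sketch of that combined pattern, reusing the flags and paths from the question (the LDAP_ADMIN_PASSWORD build argument is my assumption, not part of the original post):
ARG LDAP_ADMIN_PASSWORD
# start slapd and run ldapadd inside the SAME layer: a daemon started
# in an earlier RUN is gone by the time the next RUN executes
RUN /usr/sbin/slapd -f /etc/openldap/slapd.conf -h "ldapi:/// ldap:///" \
    && sleep 5 \
    && ldapadd -x -w "$LDAP_ADMIN_PASSWORD" -D "cn=ldapadm,dc=mydomain" \
       -f /etc/openldap/schema/base.ldif
Built with docker build --build-arg LDAP_ADMIN_PASSWORD=... ., this keeps the password out of entrypoint.sh, though note that build arguments remain visible in the image history.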

Related

Docker image silently fails to start

The following docker image successfully builds but will just not run - without any (visible) error:
FROM alpine:3
RUN apk --no-cache add transmission-cli transmission-daemon \
&& mkdir -p /transmission/config \
&& chmod -R 777 /transmission
RUN apk add --no-cache --upgrade bash
RUN wget -P ./transmission/ https://speed.hetzner.de/100MB.bin
ENV TRACKER_URL="http://104.219.73.18:6969/announce"
RUN echo tracker url is: ${TRACKER_URL}
RUN transmission-create -t ${TRACKER_URL} -o ./transmission/testfile.torrent /transmission/100MB.bin
CMD transmission-daemon -c ./transmission --config-dir ./transmission/
The logs show nothing for me. What's strange: if I build the image, run a shell inside it and execute the CMD, everything runs just fine. I've already tried wrapping the CMD in a bash script and executing that (because why not), but with the same result.
What am I missing?
If you run transmission-daemon yourself on the command line, you'll see that it puts itself into the background. From Docker's perspective, your container just exited!
You need to add the -f (--foreground) option to your transmission-daemon command line:
CMD transmission-daemon -f -c ./transmission --config-dir ./transmission/
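A quick way to confirm the fix (the transmission-test tag and tr container name are just illustrative):
docker build -t transmission-test .
docker run -d --name tr transmission-test
docker logs -f tr
With -f the daemon stays in the foreground, so the container keeps running and its output shows up in docker logs instead of the container exiting silently.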

Jmeter in Docker

I am trying to run JMeter in Docker. I have the Dockerfile below, and an entrypoint.sh is added as the ENTRYPOINT as well.
# base image was missing in the post; any Debian-based image with apt-get should work here
FROM ubuntu:18.04
ARG JMETER_VERSION="5.2.1"
RUN mkdir /jmeter
WORKDIR /jmeter
RUN apt-get update \
&& apt-get install wget -y \
&& apt-get install openjdk-8-jdk -y \
&& wget https://archive.apache.org/dist/jmeter/binaries/apache-jmeter-5.2.1.tgz \
&& tar -xzf apache-jmeter-5.2.1.tgz \
&& rm apache-jmeter-5.2.1.tgz
ENV JAVA_HOME /usr/lib/jvm/java-8-openjdk-amd64
RUN export JAVA_HOME
RUN echo $JAVA_HOME
ENV JMETER jmeter/apache-jmeter-5.2.1/bin
ENV PATH $PATH:$JMETER
RUN export JMETER
RUN echo $JMETER
WORKDIR /jmeter/apache-jmeter-5.2.1
COPY users.jmx /jmeter/apache-jmeter-5.2.1
COPY entrypoint.sh /jmeter/apache-jmeter-5.2.1
RUN ["chmod", "+x", "entrypoint.sh"]
ENTRYPOINT ["./entrypoint.sh"]
entrypoint.sh
#!/bin/bash
# Inspired from https://github.com/hhcordero/docker-jmeter-client
# Basically runs jmeter, assuming the PATH is set to point to JMeter bin-dir (see Dockerfile)
#
# This script expects the standard JMeter command parameters.
#
set -e
freeMem=`awk '/MemFree/ { print int($2/1024) }' /proc/meminfo`
s=$(($freeMem/10*8))
x=$(($freeMem/10*8))
n=$(($freeMem/10*2))
export JVM_ARGS="-Xmn${n}m -Xms${s}m -Xmx${x}m"
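# e.g. with freeMem=2048 this yields JVM_ARGS="-Xmn408m -Xms1632m -Xmx1632m"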
echo "START Running Jmeter on `date`"
echo "JVM_ARGS=${JVM_ARGS}"
echo "jmeter args=$#"
# Keep entrypoint simple: we must pass the standard JMeter arguments
bin/jmeter.sh $#
echo "END Running Jmeter on `date`"
Now when I run the container without any JMeter arguments:
docker run sar/test12
I get this error:
An error occurred: No X11 DISPLAY variable was set, but this program performed an operation which requires it.
But when I run the JMeter container with arguments:
docker run -v /home/jmeter/unbuntjmeter/:/jmeter/apache-jmeter-5.2.1 sar/test12 -n -t ./users.jmx
docker: Error response from daemon: OCI runtime create failed: container_linux.go:349: starting container process caused "exec: "./entrypoint.sh": permission denied": unknown.
Solutions
For the X11 issue, you can try setting -e DISPLAY=$DISPLAY in your docker run; you may need to perform some other steps to get it working properly, depending on how your host is set up. But trying to get the GUI working here seems like overkill. To fix the problem when you pass through the command arguments, you can either:
Add execute permissions to the entrypoint.sh file on your host by running chmod +x /home/jmeter/unbuntjmeter/entrypoint.sh.
Or
Don't mount /home/jmeter/unbuntjmeter/ into the container by removing the -v argument from your docker run command.
Problem
When you run this command docker run -v /home/jmeter/unbuntjmeter/:/jmeter/apache-jmeter-5.2.1 sar/test12 -n -t ./users.jmx, you are mounting the directory /home/jmeter/unbuntjmeter/ from your host machine onto /jmeter/apache-jmeter-5.2.1 in your docker container.
That means your /jmeter/apache-jmeter-5.2.1/entrypoint.sh script in the container is being overwritten by the one in that directory on your host (if there is one, which there does seem to be). This file on your host machine doesn't have the proper permissions to be executed in your container (presumably it just needs +x because you are running this in your build: RUN ["chmod", "+x", "entrypoint.sh"]).
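If you still want to supply the test plan from the host, a narrower mount avoids shadowing the image's entrypoint.sh entirely (a sketch using the paths from the question):
docker run -v /home/jmeter/unbuntjmeter/users.jmx:/jmeter/apache-jmeter-5.2.1/users.jmx sar/test12 -n -t ./users.jmx
Only users.jmx is bind-mounted, so the entrypoint.sh baked into the image (with its chmod +x from the build) is the one that runs.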

How to extend nginx docker image without getting error systemctl: command not found?

I want to build my own custom docker image from nginx image.
I override the ENTRYPOINT of nginx with my own ENTRYPOINT file.
This brings me to two questions:
I think I lose some settings from nginx by doing so. Am I right? (like exposing the port...)
If I want to restart nginx I run these commands: nginx -t && systemctl reload nginx, but the output is:
nginx: configuration file /etc/nginx/nginx.conf test is successful
/entrypoint.sh: line 5: systemctl: command not found
How to fix that?
FROM nginx:latest
WORKDIR /
RUN echo "deb http://ftp.debian.org/debian stretch-backports main" >> /etc/apt/sources.list
RUN apt-get -y update && \
apt-get -y install apt-utils && \
apt-get -y upgrade && \
apt-get -y clean
# I ALSO WANT TO INSTALL CERBOT FOR LATER USE (in my entrypoint file)
RUN apt-get -y install python-certbot-nginx -t stretch-backports
# COPY ./something ./tothisimage
# COPY ./something ./tothisimage
# COPY ./something ./tothisimage
# COPY ./something ./tothisimage
COPY entrypoint.sh /entrypoint.sh
ENTRYPOINT ["bash", "/entrypoint.sh"]
entrypoint.sh
echo "in entrypoint"
# I want to run some commands here...
# After I want to run nginx normally....
nginx -t && systemctl reload nginx
echo "after reload"
This will work using the service command:
echo "in entrypoint"
# I want to run some commands here...
# After I want to run nginx normally....
nginx -t && service nginx reload
echo "after reload"
output:
in entrypoint
nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful
Restarting nginx: nginx.
after reload
Commands like service and systemctl mostly just don't work in Docker, and you should totally ignore them.
At the point where your entrypoint script is running, it is literally the only thing that is running. That means you don't need to restart nginx, because it hasn't started the first time yet. The standard pattern here is to use the entrypoint script to do some first-time setup; it will be passed the actual command to run as arguments, so you need to tell it to run them.
#!/bin/sh
echo "in entrypoint"
# ... do first-time setup ...
# ...then run the command, nginx or otherwise
exec "$#"
(Try running docker run --rm -it myimage /bin/sh. You will get an interactive shell in a new container, but after this first-time setup has happened.)
The one thing you do lose in your Dockerfile is the default CMD from the base image (setting an ENTRYPOINT resets that). You need to add back that CMD:
COPY entrypoint.sh /entrypoint.sh
RUN chmod +x /entrypoint.sh
ENTRYPOINT ["/entrypoint.sh"]
CMD ["nginx", "-g", "daemon off;"]
You should keep the other settings from the base image, like ENV definitions and EXPOSEd ports.
The "systemctl" command is specific to some SystemD based operating system. But you do not have such a SystemD daemon running on PID 1 - so even if you install those packages it wont work.
You can only check in the nginx.service file which command the "reload" would execute for real. Or have something like the docker-systemctl-replacement script do it for you.
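For nginx in particular, the reload that the service file performs boils down to nginx's own signal handling, so a hedged equivalent inside the container is:
# test the config, then ask the running master process to re-read it
nginx -t && nginx -s reload
This only makes sense once an nginx master process exists; in the entrypoint pattern above, nginx has not started yet, so there is nothing to reload.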

PHP and redis in same docker image

I'm trying to add redis to a php:7.0-apache image, using this Dockerfile:
FROM php:7.0-apache
RUN apt-get update && apt-get -y install build-essential tcl
RUN cd /tmp \
&& curl -O http://download.redis.io/redis-stable.tar.gz \
&& tar xzvf redis-stable.tar.gz \
&& cd redis-stable \
&& make \
&& make install
COPY php.ini /usr/local/etc/php/
COPY public /var/www/html/
RUN chown -R root:www-data /var/www/html
RUN chmod -R 1755 /var/www/html
RUN find /var/www/html -type d -exec chmod 1775 {} +
RUN mkdir -p /var/redis/6379
COPY 6379.conf /etc/redis/6379.conf
COPY redis_6379 /etc/init.d/redis_6379
RUN chmod 777 /etc/init.d/redis_6379
RUN update-rc.d redis_6379 defaults
RUN service apache2 restart
RUN service redis_6379 start
It builds and runs fine, but redis is never started. When I run /bin/bash inside my container and manually input "service redis_6379 start", it works, so I'm assuming my .conf and init.d files are okay.
While I'm aware it'd be much easier using docker-compose, I'm specifically trying to avoid it for specific reasons.
There are multiple things wrong here:
Starting processes in a Dockerfile has no effect. A Dockerfile builds an image; the processes need to be started when a container is created from it. This is done with an entrypoint, defined in the Dockerfile via ENTRYPOINT. That entrypoint is typically a script that is executed when an actual container is started (see the sketch below).
There is no init process in Docker by default, so issuing service calls will fail without further work. If you need to start multiple processes, look at the docs of the supervisord program.
Running both redis and a webserver in one container is not best practice. For a PHP application using redis you'd typically have two containers, one running redis and one running apache, and let them interact via the network.
I suggest you read the docker documentation before continuing. All this is described in depth there.
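For the single-container route, a minimal entrypoint sketch (assuming the init.d script from the question; apache2-foreground is the command the php:7.0-apache image normally runs):
#!/bin/bash
# start redis in the background via the init script copied in the Dockerfile
service redis_6379 start
# then hand PID 1 over to Apache in the foreground
exec apache2-foreground
Wire it up in the Dockerfile with:
COPY entrypoint.sh /entrypoint.sh
RUN chmod +x /entrypoint.sh
ENTRYPOINT ["/entrypoint.sh"]
Note that if redis dies nothing will restart it; that is the gap supervisord (or a second container) fills.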
I agree with @Richard. Use two or more containers according to your needs, then --link them to get things working!

Docker-image-as-executable: should I execute in CMD or ENTRYPOINT?

I am creating a Docker image to initialize my PostgreSQL database. It looks like this:
FROM debian:stretch
RUN set -x \
&& ... ommitted ...
&& apt-get install postgresql-client -y
COPY scripts /scripts
CMD cd /scripts && psql -f myscript.sql
This works great. Every time I need to initialize my database I start the container (docker run --rm my-image). After the psql command is done, the container is automatically stopped and removed (because of the --rm). So basically, I have a Docker-image-as-executable.
But, I am confused whether that last line should be:
CMD cd /scripts && psql -f myscript.sql
or
ENTRYPOINT cd /scripts && psql -f myscript.sql
Which one should be used in my case (Docker-image-as-executable)? Why?
You need to use ENTRYPOINT if you want to make it a "Docker-image-as-executable".
RUN executes the command(s) you give in a new layer and creates a new image. It is mainly used for installing packages.
CMD sets the default command and/or parameters, which can be overwritten or bypassed from the command line when docker runs the container.
ENTRYPOINT is used when you want to run a container as an executable.
Both ENTRYPOINT and CMD will do the same thing here. The main difference is that with CMD you have more flexibility in overriding the command that runs from the CLI.
So if you have the Dockerfile:
FROM debian:stretch
RUN set -x \
&& ... ommitted ...
&& apt-get install postgresql-client -y
COPY scripts /scripts
CMD cd /scripts && psql -f myscript.sql
You can override the CMD defined in the Dockerfile from the CLI to run a different command:
docker run --rm my-image psql -f MYSCRIPT2.sql
This will run MYSCRIPT2.sql as given on the CLI. You can't do that with ENTRYPOINT (short of passing docker run --entrypoint).
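A common middle ground (a sketch, not taken from the answers above) is to put the fixed part in ENTRYPOINT and the overridable part in CMD; in exec form, docker appends the CMD to the ENTRYPOINT:
FROM debian:stretch
# ... package installation as above ...
COPY scripts /scripts
WORKDIR /scripts
ENTRYPOINT ["psql"]
CMD ["-f", "myscript.sql"]
Now docker run --rm my-image still executes psql -f myscript.sql, while docker run --rm my-image -f MYSCRIPT2.sql replaces only the arguments: the image always behaves like a psql executable.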
