The following Docker image builds successfully but just won't run, without any (visible) error:
FROM alpine:3
RUN apk --no-cache add transmission-cli transmission-daemon \
&& mkdir -p /transmission/config \
&& chmod -R 777 /transmission
RUN apk add --no-cache --upgrade bash
RUN wget -P ./transmission/ https://speed.hetzner.de/100MB.bin
ENV TRACKER_URL="http://104.219.73.18:6969/announce"
RUN echo tracker url is: ${TRACKER_URL}
RUN transmission-create -t ${TRACKER_URL} -o ./transmission/testfile.torrent /transmission/100MB.bin
CMD transmission-daemon -c ./transmission --config-dir ./transmission/
The logs show nothing. What's strange: if I build the image, run a shell inside it, and execute the CMD there, everything runs just fine. I've already tried wrapping the CMD in a bash script and executing that (because why not), but with the same result.
What am I missing?
If you run transmission-daemon yourself on the command line, you'll see that it puts itself into the background. From Docker's perspective, your container just exited!
You need to add the -f (--foreground) option to your transmission-daemon command line:
CMD transmission-daemon -f -c ./transmission --config-dir ./transmission/
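A toy illustration of why this matters, using a backgrounded `sleep` as a stand-in for a daemonizing process: the foreground command returns immediately, and since Docker treats that command as PID 1, the container stops as soon as it exits.

```shell
# A self-backgrounding daemon behaves like this: the foreground parent
# returns right away, while the real work continues in a child process.
sh -c 'sleep 30 & exit 0'
# From Docker's point of view this is the end of PID 1, so the
# container stops here, child process or not.
echo "main process exited with status $?"
# prints: main process exited with status 0
```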
I am trying to run JMeter in Docker. I have the Dockerfile below, and the ENTRYPOINT runs an entrypoint.sh that I have also added.
ARG JMETER_VERSION="5.2.1"
RUN mkdir /jmeter
WORKDIR /jmeter
RUN apt-get update \
&& apt-get install wget -y \
&& apt-get install openjdk-8-jdk -y \
&& wget https://archive.apache.org/dist/jmeter/binaries/apache-jmeter-5.2.1.tgz \
&& tar -xzf apache-jmeter-5.2.1.tgz \
&& rm apache-jmeter-5.2.1.tgz
ENV JAVA_HOME /usr/lib/jvm/java-8-openjdk-amd64
RUN export JAVA_HOME
RUN echo $JAVA_HOME
ENV JMETER jmeter/apache-jmeter-5.2.1/bin
ENV PATH $PATH:$JMETER_BIN
RUN export JMETER
RUN echo $JMETER
WORKDIR /jmeter/apache-jmeter-5.2.1
COPY users.jmx /jmeter/apache-jmeter-5.2.1
COPY entrypoint.sh /jmeter/apache-jmeter-5.2.1
RUN ["chmod", "+x", "entrypoint.sh"]
ENTRYPOINT ["./entrypoint.sh"]
entrypoint.sh
#!/bin/bash
# Inspired from https://github.com/hhcordero/docker-jmeter-client
# Basically runs jmeter, assuming the PATH is set to point to JMeter bin-dir (see Dockerfile)
#
# This script expects the standard JMeter command parameters.
#
set -e
freeMem=`awk '/MemFree/ { print int($2/1024) }' /proc/meminfo`
s=$(($freeMem/10*8))
x=$(($freeMem/10*8))
n=$(($freeMem/10*2))
export JVM_ARGS="-Xmn${n}m -Xms${s}m -Xmx${x}m"
echo "START Running Jmeter on `date`"
echo "JVM_ARGS=${JVM_ARGS}"
echo "jmeter args=$@"
# Keep entrypoint simple: we must pass the standard JMeter arguments
bin/jmeter.sh "$@"
echo "END Running Jmeter on `date`"
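An aside on the last lines of that script: in POSIX shell, `$#` expands to the argument *count*, while `"$@"` expands to the arguments themselves with word boundaries preserved, which is what you want when forwarding options to `bin/jmeter.sh`. A quick standalone illustration:

```shell
# $# is the number of arguments; "$@" is the argument list itself.
show_args() {
  echo "count=$#"
  printf 'arg=%s\n' "$@"
}
show_args -n -t ./users.jmx
# prints:
# count=3
# arg=-n
# arg=-t
# arg=./users.jmx
```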
Now, when I try to run the container without JMeter arguments, the container starts and asks for JMeter arguments:
docker run sar/test12
I get this error:
An error occurred:
No X11 DISPLAY variable was set, but this program performed an operation which requires it.
But when I run the JMeter container with arguments:
docker run -v /home/jmeter/unbuntjmeter/:/jmeter/apache-jmeter-5.2.1 sar/test12 -n -t ./users.jmx
docker: Error response from daemon: OCI runtime create failed: container_linux.go:349: starting container process caused "exec: "./entrypoint.sh": permission denied": unknown.
Solutions
For the X11 issue, you can try setting -e DISPLAY=$DISPLAY in your docker run; you may need some additional steps to get it working properly, depending on how your host is set up. But trying to get the GUI working here seems like overkill. To fix your problem when you pass the command arguments through, you can either:
Add execute permissions to the entrypoint.sh file on your host by running chmod +x /home/jmeter/unbuntjmeter/entrypoint.sh.
Or
Don't mount /home/jmeter/unbuntjmeter/ into the container by removing the -v argument from your docker run command.
Problem
When you run this command docker run -v /home/jmeter/unbuntjmeter/:/jmeter/apache-jmeter-5.2.1 sar/test12 -n -t ./users.jmx, you are mounting the directory /home/jmeter/unbuntjmeter/ from your host machine onto /jmeter/apache-jmeter-5.2.1 in your docker container.
That means your /jmeter/apache-jmeter-5.2.1/entrypoint.sh script in the container is being overwritten by the one in that directory on your host (if there is one, which there does seem to be). This file on your host machine doesn't have the proper permissions to be executed in your container (presumably it just needs +x because you are running this in your build: RUN ["chmod", "+x", "entrypoint.sh"]).
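The execute bit is a property of the file itself, so the copy on the host must carry it for the bind-mounted script to run. A small sketch, with a temporary file standing in for entrypoint.sh:

```shell
# A newly created file has no execute permission; chmod +x adds it.
f=$(mktemp)
printf '#!/bin/sh\necho ok\n' > "$f"
if test -x "$f"; then echo "executable"; else echo "not executable"; fi
chmod +x "$f"
if test -x "$f"; then echo "executable"; else echo "not executable"; fi
rm -f "$f"
# prints "not executable", then "executable"
```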
I want to run an OpenLDAP server in a docker container using CentOS7.
I managed to get a container running with OpenLDAP installed in it. My problem is that I am using an entrypoint.sh script to start the slapd service and add a user to my directory. I would like these two steps to be in the Dockerfile, so that the password used by ldapadd is not stored in the script.
So far I have only found examples for Debian.
https://github.com/kanboard/docker-openldap/blob/master/memberUid/Dockerfile is what I would like to do, but using CentOS 7.
I tried starting the slapd service in my Dockerfile, without success.
My Dockerfile looks like this:
FROM centos:7
RUN yum -y update && yum -y install \
openldap-servers \
openldap-clients \
libselinux-python \
openssl \
; yum clean all
RUN chown ldap:ldap -R /var/lib/ldap
COPY slapd.conf /etc/openldap/slapd.conf
COPY base.ldif /etc/openldap/schema/base.ldif
COPY entrypoint.sh /entrypoint.sh
RUN chmod 500 /entrypoint.sh
ENTRYPOINT ["/entrypoint.sh"]
My entrypoint.sh script looks like this:
#!/bin/bash
exec /usr/sbin/slapd -f /etc/openldap/slapd.conf -h "ldapi:/// ldap:///" -d stats &
sleep 10
ldapadd -x -w mypassword -D "cn=ldapadm,dc=mydomain" -f /etc/openldap/schema/base.ldif
This does work; however, I want to start the LDAP service and run the ldapadd command in the Dockerfile, so that mypassword is not stored in entrypoint.sh.
Hence I tried these commands:
RUN systemctl slapd start
RUN ldapadd -x -w password -D "cn=ldapadm,dc=mydomain" -f /etc/openldap/schema/base.ldif
Of course this does not work, as systemctl does not work in a Dockerfile. What is the best alternative? I was considering having one container start the LDAP service, but then I do not know how to call it while building the image of the other container...
EDIT:
Thanks to Guido U. Draheim, I have an alternative to systemctl for starting the slapd service.
My Dockerfile now looks like this:
FROM centos:7
RUN yum -y update && yum -y install \
openldap-servers \
openldap-clients \
libselinux-python \
openssl \
; yum clean all
RUN chown ldap:ldap -R /var/lib/ldap
COPY slapd.conf /etc/openldap/slapd.conf
COPY base.ldif /etc/openldap/schema/base.ldif
COPY files/docker/systemctl.py /usr/bin/systemctl
RUN systemctl enable slapd
RUN systemctl start slapd;\
ldapadd -x -w password -D "cn=ldapadm,dc=sblanche" -f /etc/openldap/schema/base.ldif
COPY entrypoint.sh /entrypoint.sh
RUN chmod 500 /entrypoint.sh
ENTRYPOINT ["/entrypoint.sh"]
But I have got the following error: ldap_bind: Invalid credentials (49)
(a) You could use the docker-systemctl-replacement to run your "systemctl.py start slapd". That is the obvious first error.
(b) Each RUN in a Dockerfile starts a new container, so a process started in an earlier RUN cannot survive anyway. That's why the referenced Dockerfile example combines the two commands with "&&".
And yeah, (c) I am using an OpenLDAP CentOS container myself. So go ahead, try again.
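Building on (b), and on the original goal of keeping the password out of the image: the service start and the ldapadd can go into a single RUN, and a BuildKit secret mount can supply the password so it never lands in a layer or in entrypoint.sh. This is a sketch only; the secret id ldap_pw and the file paths are assumptions, and systemctl here is the docker-systemctl-replacement script copied in earlier:

```dockerfile
# syntax=docker/dockerfile:1
# One RUN = one container, so start slapd and run ldapadd together.
# The password is read from a BuildKit secret, not stored in the image.
RUN --mount=type=secret,id=ldap_pw \
    systemctl start slapd && \
    ldapadd -x -w "$(cat /run/secrets/ldap_pw)" \
        -D "cn=ldapadm,dc=mydomain" -f /etc/openldap/schema/base.ldif
```

Built with something like `docker build --secret id=ldap_pw,src=./ldap_pw.txt .` (requires BuildKit).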
I have created a Docker image based on your project, but docker run throws Error: Could not find or load main class org.gradle.wrapper.GradleWrapperMain. Here is the Dockerfile if you want to debug, run, and help me fix this error:
FROM openjdk:8-jdk-alpine
WORKDIR /opt
RUN wget -q https://services.gradle.org/distributions/gradle-3.3-bin.zip \
    && unzip gradle-3.3-bin.zip -d /opt \
    && rm gradle-3.3-bin.zip
RUN echo "$PWD"
RUN apk add git
RUN git clone https://github.com/TechieTester/gatling-fundamentals.git
RUN echo "$PWD"
RUN cp -vif /opt/gatling-fundamentals/gradlew /opt/gradle-3.3/bin/
RUN mv -vif /opt/gatling-fundamentals/src/* /opt/gradle-3.3/bin/
RUN find /opt/
RUN chmod 777 /opt/gradle-3.3/bin/gradlew
ENV GRADLE_HOME /opt/gradle-3.3
ENV PATH $PATH:/opt/gradle-3.3/bin
The Docker image is created successfully locally using the command below:
docker build -t fromscratch4:local .
Then I try to run it with the command below.
Mind you, I have given gradlew full access using:
chmod 777 gradlew
You will get the error below... please help:
PS C:\Gatling2\gatling6games> docker run --rm -w /opt/gatling-fundamentals/
fromscratch4:local sh -c "gradle wrapper | gradlew gatlingRun
simulations.RuntimeParameters"
Error: Could not find or load main class
org.gradle.wrapper.GradleWrapperMain
The response from @MatthewLDaniel worked.
I am creating a Docker image to initialize my PostgreSQL database. It looks like this:
FROM debian:stretch
RUN set -x \
&& ... omitted ...
&& apt-get install postgresql-client -y
COPY scripts /scripts
CMD cd /scripts && psql -f myscript.sql
This works great. Every time I need to initialize my database I start the container (docker run --rm my-image). After the psql command is done, the container is automatically stopped and removed (because of the --rm). So basically, I have a Docker-image-as-executable.
But, I am confused whether that last line should be:
CMD cd /scripts && psql -f myscript.sql
or
ENTRYPOINT cd /scripts && psql -f myscript.sql
Which one should be used in my case (Docker-image-as-executable)? Why?
You need to use ENTRYPOINT if you want to make it a "Docker-image-as-executable".
RUN executes the command(s) that you give in a new layer and creates
a new image. This is mainly used for installing a new package.
CMD sets default command and/or parameters, however, we can overwrite
those commands or pass in and bypass the default parameters from
the command line when docker runs
ENTRYPOINT is used when you want to run a container as an executable.
Both ENTRYPOINT and CMD will do the same thing here. The main difference is that with CMD you have more flexibility in overriding, from the CLI, the command that runs.
So if you have the Dockerfile:
FROM debian:stretch
RUN set -x \
&& ... omitted ...
&& apt-get install postgresql-client -y
COPY scripts /scripts
CMD cd /scripts && psql -f myscript.sql
You can override the CMD defined in the Dockerfile from the CLI to run a different command:
docker run --rm my-image psql -f MYSCRIPT2.sql
This will run MYSCRIPT2.sql as given on the CLI. You can't do that with ENTRYPOINT (short of passing docker run --entrypoint).
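For completeness: an ENTRYPOINT can also be replaced from the CLI, it just takes an explicit flag instead of trailing arguments. Illustrative commands only; they assume the image above has been built and a Docker daemon is available:

```shell
# Trailing arguments replace CMD wholesale:
docker run --rm my-image psql -f MYSCRIPT2.sql

# Replacing an ENTRYPOINT needs --entrypoint; trailing args then become
# the new CMD, i.e. this runs `psql -f MYSCRIPT2.sql`:
docker run --rm --entrypoint psql my-image -f MYSCRIPT2.sql
```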
I'd like to run a script to attach a network drive every time I create a container in Docker. From what I've read this should be possible by setting a custom entrypoint. Here's what I have so far:
FROM ubuntu
COPY *.py /opt/package/my_code
RUN mkdir /efs && \
apt-get install nfs-common -y && \
echo "#!/bin/sh" > /root/startup.sh && \
echo "mount -t nfs4 -o net.nfs.com:/ /nfs" >> /root/startup.sh && \
echo "/bin/sh -c '$1'" >> /root/startup.sh && \
chmod +x /root/startup.sh
WORKDIR /opt/package
ENV PYTHONPATH /opt/package
ENTRYPOINT ["/root/startup.sh"]
At the moment my CMD is not getting passed through properly to my /bin/sh line, but I'm wondering if there isn't an easier way to accomplish this?
Unfortunately I don't have control over how my containers will be created. This means I can't simply prepend the network mounting command to the original docker command.
From documentation:
CMD should be used as a way of defining default arguments for an ENTRYPOINT command or for executing an ad-hoc command in a container
So if you have an ENTRYPOINT specified, the CMD will be passed as additional arguments for it. It means that your entrypoint script should explicitly handle these arguments.
In your case, when you run:
docker run yourimage yourcommand
What is executed in your container is:
/root/startup.sh yourcommand
The solution is to add exec "$@" at the end of your /root/startup.sh script. This way, it will execute any command given as its arguments.
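A minimal stand-in for the suggested startup.sh, runnable outside Docker (the nfs mount line is replaced by a comment, since it is site-specific):

```shell
# Write a stand-in startup.sh whose last line is `exec "$@"`:
cat > /tmp/startup.sh <<'EOF'
#!/bin/sh
# (the nfs mount command from the question would go here)
exec "$@"
EOF
chmod +x /tmp/startup.sh

# Docker would effectively invoke: /root/startup.sh <CMD and its args>
/tmp/startup.sh echo "hello from CMD"
# prints: hello from CMD
```

Because of the exec, the forwarded command replaces the script as the running process, so signals from docker stop reach it directly.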
You might want to read about the ENTRYPOINT mechanisms and its interaction with CMD.