Enable and Start Queue in Dockerfile - docker

I need some help configuring my Dockerfile so that my queue works as expected. I already ran the steps manually inside the container, but I need them to run automatically after each deploy in GitLab. Here's what I did manually inside the container:
sudo service supervisor enable
sudo service supervisor start
ps aux | grep artisan
It works perfectly, but I need the Dockerfile to run those commands for me. Here's the relevant excerpt of my Dockerfile:
COPY gal-worker /etc/supervisor/conf.d/gal-worker.conf
COPY gal-schedule /etc/supervisor/conf.d/gal-schedule.conf
RUN chown -R root:root /etc/supervisor/conf.d/*.conf
# Make sure Supervisor comes up after a reboot.
RUN sudo service supervisor enable
# Bring Supervisor up right now.
RUN sudo service supervisor start
But my pipeline can't succeed due to these errors:
Step 33/34 : RUN sudo service supervisor enable
---> Running in c04c3ab807d2
Usage: /etc/init.d/supervisord {start|stop|restart|force-reload|status}
The command '/bin/sh -c sudo service supervisor enable' returned a non-zero code: 1
Cleaning up file based variables
00:01
ERROR: Job failed: exit code 1
Any ideas?
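(Note: the usage line in the error shows that enable is not an action the supervisor init script supports, and, more fundamentally, RUN only executes while the image is being built, so nothing started there is still running in the final container. A minimal sketch of the usual fix, assuming supervisord is installed in the image, is to make it the container's foreground command:)
COPY gal-worker /etc/supervisor/conf.d/gal-worker.conf
COPY gal-schedule /etc/supervisor/conf.d/gal-schedule.conf
RUN chown -R root:root /etc/supervisor/conf.d/*.conf
# Run supervisord in the foreground as the container's main process;
# it starts the queue workers defined in conf.d when the container starts.
CMD ["/usr/bin/supervisord", "-n"]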

Related

Docker container does not run crontab

I have a Docker image based on Ubuntu. I am trying to make a bash script run each day, but the cron job never runs. When the container is running, I check whether cron is running, and it is. The bash script works perfectly, and the crontab file is correctly copied into the container. I can't seem to find where the problem is coming from.
Here is the Dockerfile:
FROM snipe/snipe-it:latest
ENV TZ=America/Toronto
RUN apt-get update \
    && apt-get install -y awscli cron \
    && apt-get clean \
    && rm -rf /var/lib/apt/lists/*
RUN mkdir /var/www/html/backups_scripts /var/www/html/config/scripts
COPY config/crontab.txt /var/www/html/backups_scripts
RUN /usr/bin/crontab /var/www/html/backups_scripts/crontab.txt
COPY config/scripts/backups.sh /var/www/html/config/scripts
CMD ["cron","-f"]
The last CMD command doesn't work. And as soon as I remove the CMD line, I get this message when I check the cron status inside the container:
root@fcfb6052274a:/var/www/html# /etc/init.d/cron status
* cron is not running
Even if I start the cron process before installing the crontab, the crontab is still not launched.
This Dockerfile is called by a Docker swarm file (compose file type). Maybe cron must be activated from the compose file.
How can I tackle this problem? Thank you.
You need to approach this differently, remembering that container images and containers are not virtual machines. A container is a single process that starts and is maintained through its lifecycle; as such, background daemons (like cron) don't normally exist in a container.
What I've seen most people do is have the container just execute whatever job you're looking for it to run, like do_the_thing.sh, and then use the docker run command on the host machine to call it via cron.
So for sake of argument, let's say you had an image called myrepo/task with a default entrypoint of do_the_thing.sh
On the host, you could add an entry to crontab:
# m h dom mon dow user command
0 */2 * * * root docker run --rm myrepo/task
Then it's down to a question of design. If the task needs files, you can pass them down via volume. If it needs to put something somewhere when it's done, maybe look at blob storage.
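For instance, a hypothetical entry that also mounts an input directory for the task:
# m h dom mon dow user command
0 */2 * * * root docker run --rm -v /srv/task-data:/data myrepo/task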
I think this question is a duplicate of one with a detailed, highly upvoted answer here. I followed its top-most Dockerfile example without issues.
Your CMD running cron in the foreground isn't the problem. I ran a quick version of your Dockerfile and, exec'ing into the container, I could confirm cron was running. I recommend checking how the cron entries in your crontab file redirect their output.
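For example, a hypothetical crontab entry that captures both stdout and stderr, so failures show up in a log instead of vanishing:
# Run the backup daily at 03:00 and log everything the script prints.
0 3 * * * /bin/bash /var/www/html/config/scripts/backups.sh >> /var/log/backups.log 2>&1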
Expanding on one of the other answers here: a container is actually a lot like a virtual machine, and containers often do run many processes concurrently. If you happen to have any other containers running, you can see this most easily by running docker stats and looking at the PIDS column.
Also, easy to examine interactively yourself like this:
$ # Start an interactive Ubuntu container with hostname my-ubuntu
$ docker run -it -h my-ubuntu ubuntu
root@my-ubuntu$ ps aw    # Shows bash and ps processes running.
root@my-ubuntu$ # Launch a ten-minute sleep in the background.
root@my-ubuntu$ sleep 600 &
root@my-ubuntu$ ps aw    # Now shows sleep also running.

Is there any way to run "pkexec" from a docker container?

I am trying to set up a Docker image (my Dockerfile is available here, sorry for the French README: https://framagit.org/Gwendal/firefox-icedtea-docker) with an old version of Firefox and an old version of Java, in order to run an old Java applet that starts a VPN. My image does work and successfully lets me start the Java applet in Firefox.
Unfortunately, the said applet then tries to run the following command in the container (I've simply removed the --config part from the command as it does not matter here):
INFO: launching '/usr/bin/pkexec sh -c /usr/sbin/openvpn --config ...'
Then the applet exits silently with an error. While investigating, I tried running a command with pkexec using the same Docker image, and it gives me this result:
$ sudo docker-compose run firefox pkexec /firefox/firefox-sdk/bin/firefox-bin -new-instance
**
ERROR:pkexec.c:719:main: assertion failed: (polkit_unix_process_get_start_time (POLKIT_UNIX_PROCESS (subject)) > 0)
But I don't know polkit at all and cannot understand this error.
EDIT: A more minimal way to reproduce the problem is with this Dockerfile:
FROM ubuntu:16.04
RUN apt-get update \
&& apt-get install -y policykit-1
And then run:
$ sudo docker build -t pkexec-test .
$ sudo docker run pkexec-test pkexec echo Hello
Which leads here again to:
ERROR:pkexec.c:719:main: assertion failed: (polkit_unix_process_get_start_time (POLKIT_UNIX_PROCESS (subject)) > 0)
Should I conclude that pkexec cannot work in a Docker container? Or is there a way to make this command work?
Sidenote: I have no control whatsoever over the Java applet I'm trying to run; it is a horrible and very dated proprietary black box that I am supposed to use at work, for which I have no access to the source code, and which I must use as is.
I solved my own problem by replacing pkexec with sudo in the Docker image and allowing passwordless sudo.
Given an Ubuntu Docker image where a user called developer was created and configured with a USER statement, add these lines:
# Install sudo and make 'developer' a passwordless sudoer
RUN apt-get update && apt-get install -y sudo
ADD ./developersudo /etc/sudoers.d/developersudo
# Replace pkexec with sudo
RUN rm /usr/bin/pkexec
RUN ln -s /usr/bin/sudo /usr/bin/pkexec
with the file developersudo containing:
developer ALL=(ALL) NOPASSWD:ALL
This replaces any call to pkexec made by a process running in the container with a call to sudo, without any password prompt, which works nicely.
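With those lines baked into the image, the earlier reproduction should succeed; a hypothetical run, assuming the test image was rebuilt with this change applied:
$ sudo docker build -t pkexec-test .
$ sudo docker run pkexec-test pkexec echo Hello
Hello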

Start Node Manager in WebLogic (Docker) using a script

I tried to dockerize a WebLogic server. Now I am facing an issue with starting the Node Manager after the server is started inside the Docker container. My Dockerfile is below.
FROM oracle/weblogic:12.1.3-generic
ENV JAVA_OPTIONS="${JAVA_OPTIONS} -Dweblogic.nodemanager.SecureListener=false" \
    ADMIN_PORT="7001" \
    ADMIN_HOST="localhost"
USER oracle
COPY dockerfiles/keyStore/keystore_ss.jks /u01/oracle/keystore/
COPY dockerfiles/patch/* /u01/oracle/patch/
COPY dockerfiles/local_domainScripts /u01/oracle/local_domainScripts/
COPY dockerfiles/scripts/* /u01/oracle/
COPY dockerfiles/applicationFiles/ /u01/oracle/applicationFiles/
USER root
RUN yum install -y procps
RUN chmod +x startWeblogic.sh
USER oracle
RUN /u01/oracle/wlst /u01/oracle/local_domainScripts/config.py
RUN nohup bash -c "/u01/oracle/user_projects/domains/local_domain/bin/startNodeManager.sh &" && sleep 4
CMD ["/u01/oracle/user_projects/domains/local_domain/startWebLogic.sh"]
This will create the WebLogic server instance. I want to start the Node Manager after this server has started.
Run command:
docker run -d --name wls_local_domain --network=host --hostname localhost -p 7001:7001 test-docker:0-SNAPSHOT
When ./startNodeManager.sh is executed inside the container, it starts the Node Manager. But to start the Node Manager, the WebLogic server needs to be started first.
I want to do this using a bash script. I tried this one, but it didn't help:
github link
You can't (usefully) RUN a background process. That Dockerfile command launches an intermediate container executing the RUN command, saves its filesystem, and exits; there is no process running any more by the time the next Dockerfile command executes.
If this is a commercially maintained image, you might look into whether Oracle has instructions on how to use it. (From clicking around, none of the samples there start a Node Manager; is it necessary?)
Best practice is generally to run only one server in a Docker container (and ideally in the foreground and as the container's main process). If that will work and there aren't shared filesystem dependencies, you can split all of this except the final CMD into one base Dockerfile, then have two additional Dockerfiles that just have a FROM line pointing at your mostly-built image and a requested CMD.
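A hypothetical sketch of that split (the image name is a placeholder; the domain paths come from the question), as three Dockerfiles:
# Dockerfile.base: everything above except the final CMD, built and tagged as myrepo/wls-base

# Dockerfile.admin: a container that runs only the admin server
FROM myrepo/wls-base
CMD ["/u01/oracle/user_projects/domains/local_domain/startWebLogic.sh"]

# Dockerfile.nodemanager: a container that runs only the node manager
FROM myrepo/wls-base
CMD ["/u01/oracle/user_projects/domains/local_domain/bin/startNodeManager.sh"]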
If that really won't work then you'll have to fall back to running some init system in your container, typically supervisord.
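A minimal sketch of that fallback, assuming supervisord has been installed in the image (the program paths are taken from the question): a supervisord.conf such as
[supervisord]
nodaemon=true

[program:nodemanager]
command=/u01/oracle/user_projects/domains/local_domain/bin/startNodeManager.sh

[program:weblogic]
command=/u01/oracle/user_projects/domains/local_domain/startWebLogic.sh
and, in the Dockerfile,
COPY supervisord.conf /etc/supervisord.conf
CMD ["supervisord", "-n", "-c", "/etc/supervisord.conf"]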
You need to start the Node Manager as a background process and then start the server. To keep the Docker container alive while the background processes run, you can use the tail command.
This is how I start the Node Manager and the WebLogic server in my container:
#!/bin/bash

# ------------------------------------------------------------------------------
# start the Node Manager
# ------------------------------------------------------------------------------
function startNodeManager() {
    echo "starting the node manager for $MANAGED_SERVER_NAME server..."
    "$ORACLE_HOME/user_projects/domains/$DOMAIN_NAME/bin/startNodeManager.sh" &
    while ! nc -z "$HOSTNAME" "$NODE_MANAGER_PORT"; do
        sleep 0.5
    done
    echo "node manager is up and ready to receive requests"
}
# ------------------------------------------------------------------------------
# start the WebLogic Admin server
# ------------------------------------------------------------------------------
function startAdminServer() {
    echo "starting the $ADMIN_SERVER_NAME server..."
    local logHome
    logHome="$ORACLE_HOME/user_projects/domains/$DOMAIN_NAME/servers/$ADMIN_SERVER_NAME/logs"
    mkdir -p "$logHome"
    "$ORACLE_HOME/user_projects/domains/$DOMAIN_NAME/bin/startWebLogic.sh" > "$logHome/$ADMIN_SERVER_NAME.out" 2>&1 &
}
# ------------------------------------------------------------------------------
# main app starts here
# ------------------------------------------------------------------------------
startNodeManager
startAdminServer
# this command keeps the docker container alive
tail -F \
    "$ORACLE_HOME/user_projects/domains/$DOMAIN_NAME/servers/$ADMIN_SERVER_NAME/logs/$ADMIN_SERVER_NAME.log" \
    "$ORACLE_HOME/user_projects/domains/$DOMAIN_NAME/servers/$ADMIN_SERVER_NAME/logs/$ADMIN_SERVER_NAME.nohup" \
    "$ORACLE_HOME/user_projects/domains/$DOMAIN_NAME/servers/$ADMIN_SERVER_NAME/logs/$ADMIN_SERVER_NAME.out"
This is a complete startup script that you can use as an example and improve on. It starts the Node Manager and the admin server: https://github.com/zappee/docker-images/blob/master/oracle-weblogic/oracle-weblogic-12.2.1.4-admin-server/container-scripts/startup.sh
From there you can download the complete working solution.

How can I run a script automatically after Docker container startup

I'm using Search Guard plugin to secure an elasticsearch cluster composed of multiple nodes.
Here is my Dockerfile:
FROM docker.elastic.co/elasticsearch/elasticsearch:5.6.3
USER root
# Install search guard
RUN bin/elasticsearch-plugin install --batch com.floragunn:search-guard-5:5.6.3-16 \
    && chmod +x \
        plugins/search-guard-5/tools/hash.sh \
        plugins/search-guard-5/tools/sgadmin.sh \
        bin/init_sg.sh \
    && chown -R elasticsearch:elasticsearch /usr/share/elasticsearch
USER elasticsearch
To initialize Search Guard (create internal users and assign roles), I need to run the script init_sg.sh after container startup.
Here is the problem: unless Elasticsearch is running, the script will not initialize any security index.
The script's content is :
sleep 10
plugins/search-guard-5/tools/sgadmin.sh -cd config/ -ts config/truststore.jks -ks config/kirk-keystore.jks -nhnv -icl
For now I just run the script manually after the container starts, but since I'm running it on Kubernetes, pods may get killed or fail and be recreated automatically for some reason. In that case, the plugin has to be initialized automatically after container startup!
So how to accomplish this? Any help or hint would be really appreciated.
The image itself has an entrypoint, ENTRYPOINT ["/run/entrypoint.sh"], specified in its Dockerfile. You can replace it with your own script. So, for example, create a new script, mount it, and have it first call /run/entrypoint.sh, then wait for Elasticsearch to start before running your init_sg.sh.
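A minimal sketch of such a wrapper, assuming curl is available in the image, Elasticsearch answers on localhost:9200, and the working directory is the Elasticsearch home (as in the stock image):
#!/bin/bash
# Start the stock entrypoint (which launches Elasticsearch) in the background.
/run/entrypoint.sh &

# Wait until Elasticsearch answers on its HTTP port.
until curl -s http://localhost:9200 >/dev/null; do
    sleep 2
done

# Now run the Search Guard initialization.
bin/init_sg.sh

# Keep the container attached to the Elasticsearch process.
wait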
Not sure this will solve your problem, but it's worth checking my repo's Dockerfile.
I created a simple run.sh file, copied it into the Docker image, and wrote CMD ["run.sh"] in the Dockerfile. In the same way, define whatever you want in run.sh and write CMD ["run.sh"]. Here is another such example:
Dockerfile
FROM java:8
RUN apt-get update && apt-get install stress-ng -y
ADD target/restapp.jar /restapp.jar
COPY dockerrun.sh /usr/local/bin/dockerrun.sh
RUN chmod +x /usr/local/bin/dockerrun.sh
CMD ["dockerrun.sh"]
dockerrun.sh
#!/bin/sh
# Start the app in the background.
java -Dserver.port=8095 -jar /restapp.jar &
hostname="hostname: `hostname`"
# Start a background stress workload.
nohup stress-ng --vm 4 &
# Keep the container's main process alive.
while true; do
    sleep 1000
done
This is addressed in the documentation here: https://docs.docker.com/config/containers/multi-service_container/
If one of your processes depends on the main process, then start your helper process FIRST with a script like wait-for-it, then start the main process SECOND and remove the fg %1 line.
#!/bin/bash
# turn on bash's job control
set -m
# Start the primary process and put it in the background
./my_main_process &
# Start the helper process
./my_helper_process
# the my_helper_process might need to know how to wait on the
# primary process to start before it does its work and returns
# now we bring the primary process back into the foreground
# and leave it there
fg %1
I was trying to solve the exact same problem. Here's the approach that worked for me.
Create a separate shell script that checks the ES status and starts the SG initialization only when ES is ready:
Shell Script
#!/bin/sh
echo ">>>> Right before SG initialization <<<<"
# use while loop to check if elasticsearch is running
while true
do
    netstat -uplnt | grep :9300 | grep LISTEN > /dev/null
    verifier=$?
    if [ 0 = $verifier ]
    then
        echo "Running search guard plugin initialization"
        /elasticsearch/plugins/search-guard-6/tools/sgadmin.sh -h 0.0.0.0 -cd plugins/search-guard-6/sgconfig -icl -key config/client.key -cert config/client.pem -cacert config/root-ca.pem -nhnv
        break
    else
        echo "ES is not running yet"
        sleep 5
    fi
done
Install script in Dockerfile
You will need to install the script in the container so it's accessible after the container starts.
COPY sginit.sh /
RUN chmod +x /sginit.sh
Update entrypoint script
You will need to edit the entrypoint script or run script of your ES image so that it starts sginit.sh in the background BEFORE starting the ES process.
# Run sginit in background waiting for ES to start
/sginit.sh &
This way sginit.sh starts in the background and only initializes SG after ES is up.
The reason to start this sginit.sh script in the background before ES is so that it doesn't block ES from starting. The same logic applies if you put it after the ES startup: it would never run, unless you put the ES startup itself in the background.
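A hypothetical sketch of the edited entrypoint (the path of the stock ES startup script varies by image):
#!/bin/bash
# Run sginit in the background, waiting for ES to start.
/sginit.sh &

# Hand off to the stock Elasticsearch startup in the foreground.
exec /run/entrypoint.sh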
I would suggest putting a CMD in your Dockerfile that executes the script when the container starts:
FROM debian
RUN apt-get update && apt-get install -y nano && apt-get clean
EXPOSE 8484
CMD ["/bin/bash", "/opt/your_app/init.sh"]
There is another way, but before using it, look at your requirements:
ENTRYPOINT "put your code here" && /bin/bash
# example: ENTRYPOINT service nginx start && service ssh start && /bin/bash  (use && to separate your commands)
You can also use the wait-for-it script. It waits on the availability of a host and TCP port, which is useful for synchronizing the spin-up of interdependent services, and it works like a charm with containers. It has no external dependencies, so you can just invoke it in a RUN command without doing anything else.
A Dockerfile example based on this thread:
FROM elasticsearch
# Make Elasticsearch write data to a folder that is not declared as a volume in Elasticsearch's official Dockerfile.
RUN mkdir /data \
    && chown -R elasticsearch:elasticsearch /data \
    && echo 'es.path.data: /data' >> config/elasticsearch.yml \
    && echo 'path.data: /data' >> config/elasticsearch.yml
# Download wait-for-it
ADD https://raw.githubusercontent.com/vishnubob/wait-for-it/e1f115e4ca285c3c24e847c4dd4be955e0ed51c2/wait-for-it.sh /utils/wait-for-it.sh
# Copy the files you may need and your insert script
# Insert data into elasticsearch
RUN /docker-entrypoint.sh elasticsearch -p /tmp/epid & /bin/bash /utils/wait-for-it.sh -t 0 localhost:9200 -- path/to/insert/script.sh; kill $(cat /tmp/epid) && wait $(cat /tmp/epid); exit 0;

Run command in Docker Container only on the first start

I have a Docker image which uses a script (/bin/bash /init.sh) as entrypoint. I would like to execute this script only on the first start of a container. It should be omitted when the container is restarted or started again after a crash of the Docker daemon.
Is there any way to do this with Docker itself, or do I have to implement some kind of check in the script?
I had the same issue; here is a simple procedure (i.e. a workaround) to solve it:
Step 1:
Create a "myStartupScript.sh" script that contains this code:
#!/bin/bash
CONTAINER_ALREADY_STARTED="CONTAINER_ALREADY_STARTED_PLACEHOLDER"
if [ ! -e $CONTAINER_ALREADY_STARTED ]; then
    touch $CONTAINER_ALREADY_STARTED
    echo "-- First container startup --"
    # YOUR_JUST_ONCE_LOGIC_HERE
else
    echo "-- Not first container startup --"
fi
Step 2:
Replace the line "# YOUR_JUST_ONCE_LOGIC_HERE" with the code you want to execute only the first time the container is started.
Step 3:
Set the script as the entrypoint in your Dockerfile:
ENTRYPOINT ["/myStartupScript.sh"]
In summary, the logic is quite simple: the script checks whether a specific file is present in the filesystem; if not, it creates it and executes your just-once code. The next time you start the container, the file is already in the filesystem, so the code is not executed.
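A quick illustration (image and container names are hypothetical); this works because docker start reuses the same container filesystem, while each fresh docker run creates a new one:
$ docker run --name mycontainer myimage
-- First container startup --
$ docker start -a mycontainer
-- Not first container startup --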
The entrypoint for a Docker container tells the Docker daemon what to run when you "run" that specific container. Let's ask the questions "what should the container run when it's started a second time?" and "what should the container run after being rebooted?"
Probably what you are doing follows the same approach as "old-school" provisioning mechanisms: your script "installs" the needed scripts, and you then run your app as a systemd/upstart service, right? If so, you should change that into a more "dockerized" definition.
The entrypoint for that container should be a script that actually launches your app instead of setting things up. Let's say you need Java installed to be able to run your app. So in the Dockerfile you set up the base container to install all the things you need, like:
FROM alpine:edge
RUN apk --update upgrade && apk add bash openjdk8-jre-base
RUN mkdir -p /opt/your_app/ && adduser -HD userapp
ADD target/your_app.jar /opt/your_app/your-app.jar
ADD scripts/init.sh /opt/your_app/init.sh
USER userapp
EXPOSE 8081
CMD ["/bin/bash", "/opt/your_app/init.sh"]
At the company I work for, our containers fetch their configs from Consul in the init.sh script before running the actual app (instead of providing a mount point and placing the configs on the host, or embedding them into the container). So the script looks something like:
#!/bin/bash
echo "Downloading config from consul..."
confd -onetime -backend consul -node $CONSUL_URL -prefix /cfgs/$CONSUL_APP/$CONSUL_ENV_NAME
echo "Launching your-app..."
java -jar /opt/your_app/your-app.jar
One piece of advice I can give you (from my admittedly short experience working with containers): treat your containers as stateless once they are provisioned (provisioning being all the commands you run before the entrypoint).
I had to do this, and I ended up doing a docker run -d, which just created a detached container and started bash (in the background), followed by a docker exec that did the necessary initialization. Here's an example:
docker run -itd --name=myContainer myImage /bin/bash
docker exec -it myContainer /bin/bash -c /init.sh
Now when I restart my container I can just do
docker start myContainer
docker attach myContainer
This may not be ideal, but it works fine for me.
I wanted to do the same in a Windows container. It can be achieved using Task Scheduler on Windows (the Linux equivalent of Task Scheduler is cron, which you can use in your case). To do this, edit the Dockerfile and add the following lines at the end:
WORKDIR /app
COPY myTask.ps1 .
RUN schtasks /Create /TN myTask /SC ONSTART /TR "c:\WINDOWS\system32\WindowsPowerShell\v1.0\powershell.exe C:\app\myTask.ps1" /ru SYSTEM
This creates a task named myTask that runs ONSTART; the task itself executes a PowerShell script located at C:\app\myTask.ps1.
This myTask.ps1 script will do whatever initialization you need on container startup. Make sure you delete the task once it has executed successfully, or else it will run at every startup. To delete it, you can use the following command at the end of the myTask.ps1 script:
schtasks /Delete /TN myTask /F
