I have a simple Docker image based on Ubuntu 16.04, built from a Dockerfile whose CMD is "/sbin/ejabberdctl foreground". Running the ejabberd server in the foreground is how I keep the container alive once it has started. However, after the container starts and /sbin/ejabberdctl is running, I need to execute another command (e.g. ejabberdctl list_cluster).
I tried putting both commands into a bash script, but it doesn't work. I also tried running /sbin/ejabberdctl start &, which didn't work either.
Which way should I dig?
Option A:
Create a simple bash script that runs the container and calls list_cluster, without modifying the entry point of the ejabberd Docker image.
#!/bin/bash
if [ "${1}" = "remove_old" ]; then
    echo "removing old ejabberd container"
    docker rm -f ejabberd
fi
docker run --rm --name ejabberd -d -p 5222:5222 ejabberd/ecs
sleep 5
echo -e "*******list_cluster******"
docker exec -it ejabberd ash -c "/home/ejabberd/bin/ejabberdctl list_cluster"
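A hypothetical invocation, assuming the script above is saved as run_ejabberd.sh:
chmod +x run_ejabberd.sh
./run_ejabberd.sh              # start a fresh container and list the cluster
./run_ejabberd.sh remove_old   # remove a leftover container first, then start and list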
Option B
In option B you need to modify the entry point of the official ejabberd image, since it does not allow you to run multiple scripts on boot. With a small modification you can add your own script to the boot sequence.
https://github.com/processone/docker-ejabberd/blob/master/ecs/Dockerfile
I suggest using the official Alpine-based ejabberd image (only about 30 MB) instead of Ubuntu.
https://hub.docker.com/r/ejabberd/ecs/
The demo below can be adapted for Ubuntu too, but it was tested against the official Alpine ejabberd image.
Use the official ejabberd image as the base image; ENV MASTER_NODE=ejabberd@ec2-10.0.0.1 points at the master node, in case you are interested in clustering.
FROM ejabberd/ecs:latest
USER root
RUN whoami
COPY supervisord.conf /etc/supervisord.conf
RUN apk add supervisor
RUN mkdir -p /etc/supervisord.d
COPY pm2.conf /etc/supervisord.d/ejabberd.conf
COPY start.sh /opt/ejabberd/start.sh
RUN chmod +x /opt/ejabberd/start.sh
ENV MASTER_NODE=ejabberd@ec2-10.0.0.1
ENTRYPOINT ["supervisord", "--nodaemon", "--configuration", "/etc/supervisord.conf"]
Now create the supervisord config file:
[unix_http_server]
file = /tmp/supervisor.sock
chmod = 0777
chown= nobody:nogroup
[supervisord]
logfile = /tmp/supervisord.log
logfile_maxbytes = 50MB
logfile_backups=10
loglevel = info
pidfile = /tmp/supervisord.pid
nodaemon = true
umask = 022
identifier = supervisor
[supervisorctl]
serverurl = unix:///tmp/supervisor.sock
[rpcinterface:supervisor]
supervisor.rpcinterface_factory = supervisor.rpcinterface:make_main_rpcinterface
[include]
files = /etc/supervisord.d/*.conf
Now create the program config that the Dockerfile above copies to /etc/supervisord.d/ejabberd.conf (the pm2.conf file), which starts ejabberd under supervisord. Note that the join_cluster argument is only needed if you want to join a cluster; remove it if not.
[supervisord]
nodaemon=true

[program:list_cluster]
command=/opt/ejabberd/start.sh join_cluster
stdout_logfile=/dev/stdout
stdout_logfile_maxbytes=0

[program:ejabberd]
command=/home/ejabberd/bin/ejabberdctl foreground
autostart=true
priority=1
autorestart=true
user=ejabberd
exitcodes=0,4
A /opt/ejabberd/start.sh bash script that lists the cluster once ejabberd is up, and can also join a cluster if join_cluster is passed as an argument:
#!/bin/sh
until nc -vzw 2 localhost 5222; do sleep 2; echo -e "Ejabberd is booting....."; done
if [ $? -eq 0 ]; then
    ########## Once ejabberd is up, list the cluster ##########
    echo -e "***************List_Cluster start***********"
    /home/ejabberd/bin/ejabberdctl list_cluster
    echo -e "***************List_Cluster End***********"
    ########## To join a cluster once up, set the master node as ENV and pass join_cluster as the first param ##########
    if [ "${1}" = "join_cluster" ]; then
        echo -e "***************Joining_Cluster start***********"
        /home/ejabberd/bin/ejabberdctl join_cluster $MASTER_NODE
        echo -e "***************Joining_Cluster End***********"
    fi
else
    echo -e "**********Ejabberd is down************";
fi
Build and run the Docker container:
docker build -t ejabberd .
docker run --name ejabberd --rm -it ejabberd
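If you run the container detached instead of with -it, you can still check the output of start.sh (including the list_cluster result) via the container logs, for example:
docker run -d --name ejabberd ejabberd
docker logs -f ejabberd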
What you're searching for is called supervisord. In the official Docker documentation you can find some examples of how to use it.
Be aware, though, that running multiple services in the same container is discouraged unless strictly necessary.
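A minimal sketch of that pattern for your case, roughly following the Docker docs example for an Ubuntu base (the sleep-based wait and the file paths are assumptions, not tested against your image):
# Dockerfile
RUN apt-get update && apt-get install -y supervisor
COPY supervisord.conf /etc/supervisor/conf.d/supervisord.conf
CMD ["/usr/bin/supervisord", "-n"]

# supervisord.conf
[program:ejabberd]
command=/sbin/ejabberdctl foreground

[program:list_cluster]
command=/bin/sh -c "sleep 10 && /sbin/ejabberdctl list_cluster"
autorestart=false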
Related
I have a custom image with Docker installed (docker-in-docker). When running the image, the user needs to be $USERNAME (and not root). However, the Docker service requires root to be started.
Getting Docker to run as non-root seems to be overly complicated, so I have attempted to use su in the entrypoint instead, which works, but it is not interactive.
FROM ubuntu:18.04
# ... A lot of steps here to install stuff that are not really relevant to the problem.
COPY container-helpers/entrypoint.sh .
USER root
ENV ENTRY_USER $USERNAME
ENTRYPOINT [ "./entrypoint.sh" ]
CMD "pulumi up"
And entrypoint.sh is:
#!/bin/bash
set -e
service docker start
export ENV_PATH=$PATH
su $ENTRY_USER -lp <<EOSU
set -e
export PATH=$ENV_PATH
. $NVM_DIR/nvm.sh
pulumi stack select -c dev
npx meteor-deploy stack configure default
$@ # Run given arguments as a command
EOSU
I run it as:
$ docker run --env-file local.env --privileged -it meteor-deploy-leaderboard
* Starting Docker: docker [ OK ]
Logging in using access token from PULUMI_ACCESS_TOKEN
error: --yes must be passed in to proceed when running in non-interactive mode
Or, if you don't want to take pulumi's word for it:
$ docker run --env-file local.env --privileged -it meteor-deploy-leaderboard bash; echo "exited"
* Starting Docker: docker [ OK ]
Logging in using access token from PULUMI_ACCESS_TOKEN
exited
Any idea how I can pass on the tty to the su command properly?
I followed the Oracle docs and managed to set up a running WebLogic Fusion Middleware Infrastructure container with one managed server.
I deployed an ADF application and it works perfectly fine.
But now I am stuck because I can't add more managed servers to the cluster.
The following command was used to start managedserver1, which works perfectly:
docker run -d -p 9801:8001 --network=InfraNET --volumes-from InfraAdminContainer --name InfraManagedContainer --env-file ./infraServer.env.list container-registry.oracle.com/middleware/fmw-infrastructure:12.2.1.x startManagedServer.sh
Here is the startManagedServer.sh script:
#!/bin/bash
# Copyright (c) 2014-2017 Oracle and/or its affiliates. All rights reserved.
#
#Licensed under the Universal Permissive License v 1.0 as shown at http://oss.oracle.com/licenses/upl.
#
export adminhostname=$adminhostname
export adminport=$adminport
# First Update the server in the domain
export server="infra_server1"
export DOMAIN_ROOT="/u01/oracle/user_projects/domains"
export DOMAIN_HOME="/u01/oracle/user_projects/domains/InfraDomain"
echo $adminhostname
echo $adminport
echo "DOMAIN_HOME: $DOMAIN_HOME"
/u01/oracle/oracle_common/common/bin/wlst.sh -skipWLSModuleScanning /u01/oracle/container-scripts/update_listenaddress.py $server
retval=$?
echo "RetVal from Update listener call $retval"
if [ $retval -ne 0 ];
then
echo "Update listener Failed.. Please check the Logs"
exit
fi
# Start Infra server
mkdir -p /u01/oracle/logs
$DOMAIN_HOME/bin/startManagedWebLogic.sh $server "http://"$adminhostname:$adminport > /u01/oracle/logs/startManagedWebLogic$$.log 2>&1 &
statusfile=/tmp/notifyfifo.$$
mkfifo "${statusfile}" || exit 1
{
# run tail in the background so that the shell can kill tail when notified that grep has exited
tail -f /u01/oracle/logs/startManagedWebLogic$$.log &
# remember tail's PID
tailpid=$!
# wait for notification that grep has exited
read templine <${statusfile}
echo ${templine}
# grep has exited, time to go
kill "${tailpid}"
} | {
grep -m 1 "<Notice> <WebLogicServer> <BEA-000360> <The server started in RUNNING mode.>"
# notify the first pipeline stage that grep is done
echo "RUNNING"> /u01/oracle/logs/startManagedWebLogic$$.status
echo "Infra server is running"
echo >${statusfile}
}
# clean up
rm "${statusfile}"
if [ -f /u01/oracle/logs/startManagedWebLogic$$.status ]; then
echo "Infra server has been started"
fi
#Display the logs
tail -f $DOMAIN_HOME/servers/infra_server1/logs/infra_server1.log
childPID=$!
wait $childPID
I did manage to add the managed servers in the WebLogic admin console by editing createorstartInfraDomain.sh and createInfraDomain.py.
However, editing the startManagedServer.sh file for infra_server2 is not working.
Even after editing, or even completely deleting, startManagedServer.sh from the admin container, the following command still works:
docker run -d -p 9801:8001 --network=InfraNET --volumes-from InfraAdminContainer --name InfraManagedContainer --env-file ./infraServer.env.list container-registry.oracle.com/middleware/fmw-infrastructure:12.2.1.x startManagedServer.sh
The following is what I get in the console:
root@Linux-Vostro-3250:/home/amalv/FMW-Infrastructure_Docker# docker run -p 9801:8001 --network=InfraNET --volumes-from InfraAdminContainer --name InfraManagedContainer --env-file ./infraserver.env.list oracle/fmw-infrastructure:12.2.1.0 startManagedServer.sh
InfraAdminContainer
7001
DOMAIN_HOME: /u01/oracle/user_projects/domains/InfraDomain
Initializing WebLogic Scripting Tool (WLST) ...
Welcome to WebLogic Server Administration Scripting Shell
Type help() for help on available commands
/u01/oracle/container-scripts/update_listenaddress.py called with the following sys.argv array:
sys.argv[0] = /u01/oracle/container-scripts/update_listenaddress.py
sys.argv[1] = infra_server1
c697c81b15c8
172.18.0.4
/u01/oracle/user_projects/domains/InfraDomain
INFO: SeedingConfigurationProcessor.start, finished.
INFO: SeedingConfigurationProcessor.end, finished.
Whatever I do with startManagedServer.sh, I get the above log with "sys.argv[1] = infra_server1".
Can someone help me with this?
Thanks a lot
Here is what I did to set up multiple managed servers.
Initially I ran the following command, which is from the instructions in container-registry.oracle.com:
docker run -d -p 9001:7001 --network=InfraNET -v $HOST_VOLUME:/u01/oracle/user_projects --name InfraAdminContainer --env-file ./infraDomain.env.list container-registry.oracle.com/middleware/fmw-infrastructure:12.2.1.2
Then I copied my edited container scripts over the container-scripts directory in the admin container, deleted the already-created domain directory from the container, and committed the container to a new image. (Alternatively, you can mount the edited files into the container in the docker run command.)
I edited the createInfraDomain.py and createOrStartInfradomain.sh files in the container-scripts to create 6 infra servers. This creates 6 infra-server instances, which you will be able to see in the WebLogic console.
Now use the following command to start the first Managed Server container:
docker run -d -p 9802:8001 --network=InfraNET --volumes-from InfraAdminContainer --name InfraManagedContainer --env-file ./infraserver.env.list previously-committed-image startManagedServer.sh
For starting a new managed server container, I edited the startManagedServer.sh file, changed the server value to infra_server2, and ran the following command:
docker run -d -p 9802:8001 -v /path(or)location/of/edited/cotainer-scripts/in/your/hostSystem:/u01/oracle/container-scripts --network=InfraNET --volumes-from InfraAdminContainer --name InfraManagedContainer --env-file ./infraserver.env.list previously-committed-image startManagedServer.sh
For every new container I changed the server name in startNodeManager.sh and mounted it into the container in the docker run command.
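In other words, the only per-server change inside the start script is the exported server name near the top, e.g. for the second server:
export server="infra_server2"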
I am sure there is a much simpler way to add more servers, maybe by using WLST scripting to add server instances in WebLogic, and also to start new managed server containers.
If anyone knows, please let us know.
Thanks!
I saw some blog posts where people talk about JMeter and Docker. I understand that Docker is helpful for setting up a container with all the dependencies. But they all run/create the containers on the same host, so all the containers share the host's resources. It is like running multiple instances of JMeter on the same host, which will not help to generate more load.
When a host has 12 GB of RAM, I think one JMeter instance with a 10 GB heap can generate more load than 10 containers each running one JMeter instance.
What is the point of running docker here?
I made an automated solution that can be easily integrated with Jenkins.
The Dockerfile extends java:8 and adds the JMeter build. I will call this Docker image jmeter-base:
FROM java:8
RUN mkdir /jmeter \
&& cd /jmeter/ \
&& wget https://archive.apache.org/dist/jmeter/binaries/apache-jmeter-3.3.tgz \
&& tar -xvzf apache-jmeter-3.3.tgz \
&& rm apache-jmeter-3.3.tgz
ENV JMETER_HOME /jmeter/apache-jmeter-3.3/
# Add Jmeter to the Path
ENV PATH $JMETER_HOME/bin:$PATH
If you want to use a master-slave solution, this is the jmeter master Dockerfile:
FROM jmeter-base
WORKDIR $JMETER_HOME
# Ports to be exposed from the container for JMeter Master
RUN mkdir scripts
EXPOSE 60000
And this is the jmeter slave Dockerfile:
FROM jmeter-base
# Ports to be exposed from the container for JMeter Slaves/Server
EXPOSE 1099 50000
# Application to run on starting the container
ENTRYPOINT $JMETER_HOME/bin/jmeter-server \
-Dserver.rmi.localport=50000 \
-Dserver_port=1099
Now, with both images built, run a script that discovers all the slave IPs and executes the test plans. This script does all the work:
#!/bin/bash
COUNT=${1-1}
docker build -t jmeter-base jmeter-base
docker-compose build && docker-compose up -d && docker-compose scale master=1 slave=$COUNT
SLAVE_IP=$(docker inspect -f '{{.Name}} {{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' $(docker ps -aq) | grep slave | awk -F' ' '{print $2}' | tr '\n' ',' | sed 's/.$//')
WDIR=`docker exec -it master /bin/pwd | tr -d '\r'`
mkdir -p results
for filename in scripts/*.jmx; do
NAME=$(basename $filename)
NAME="${NAME%.*}"
eval "docker cp $filename master:$WDIR/scripts/"
eval "docker exec -it master /bin/bash -c 'mkdir $NAME && cd $NAME && ../bin/jmeter -n -t ../$filename -R$SLAVE_IP'"
eval "docker cp master:$WDIR/$NAME results/"
done
docker-compose stop && docker-compose rm -f
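Note that the script assumes a docker-compose.yml describing the master and slave services built from the two Dockerfiles above; that file is not shown here, but a minimal sketch (the directory names and the tty setting are assumptions) could look like this:
version: '2'
services:
  master:
    build: ./jmeter-master
    container_name: master
    tty: true
  slave:
    build: ./jmeter-slave
With that in place, calling the script with a slave count (e.g. ./run-tests.sh 3, whatever you name it) builds the images, scales out 3 slaves, runs every .jmx plan in scripts/ through the master, and copies the results back into results/.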
I came to understand from this post by a friend of mine that we should not run multiple Docker containers on the same host to generate more load:
http://www.testautomationguru.com/jmeter-distributed-load-testing-using-docker/
Instead, the point of Docker here is to quickly set up the JMeter environment.
I am trying to start a few services in my container when it starts.
This is my entry_point script:
#!/bin/bash
set -e
mkdir -p /app/log
tail -n 0 -f /var/log/*.log &
tail -n 0 -f ./log/current.log &
# Start Gunicorn processes
#echo Starting Nginx.
#exec /etc/init.d/nginx start
echo Starting Gunicorn.
exec gunicorn app.main:app \
--name price_service \
-c config/gunicorn.conf \
"$@"
What I would like to do is uncomment this line:
#exec /etc/init.d/nginx start
But at startup the container just hangs here.
Any solutions ?
You should read about exec (http://man7.org/linux/man-pages/man3/exec.3.html) to see what it does.
What you need is either to background the process using & (not recommended), or to use a process manager or init system (see Can I run multiple programs in a Docker container?; also not really recommended).
Or you can run multiple containers and use docker-compose to manage them (recommended).
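For the recommended route, a minimal docker-compose.yml sketch (the bind port, nginx config path and build context are assumptions, not taken from your project) might look like:
version: '3'
services:
  app:
    build: .
    command: gunicorn app.main:app --name price_service -c config/gunicorn.conf --bind 0.0.0.0:8000
  nginx:
    image: nginx:stable
    ports:
      - "80:80"
    volumes:
      - ./nginx.conf:/etc/nginx/conf.d/default.conf:ro   # proxy_pass to http://app:8000
    depends_on:
      - app
nginx.conf would then proxy_pass to http://app:8000, and each container runs exactly one foreground process.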
We have Docker running on one machine and a workstation running on another machine.
I want to bootstrap the Docker container from the workstation, so our image needs to be SSH-enabled.
How do I make a Docker image SSH-enabled?
Before you add SSH, you should check whether docker exec will be sufficient for what you need (doc link).
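For example, to get an interactive shell in an already-running container:
docker exec -it <container-name-or-id> /bin/bash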
If you do need SSH, the following Dockerfile should help (copied from Docker docs):
# sshd
#
# VERSION 0.0.2
FROM ubuntu:14.04
MAINTAINER Sven Dowideit <SvenDowideit@docker.com>
RUN apt-get update && apt-get install -y openssh-server
RUN mkdir /var/run/sshd
RUN echo 'root:screencast' | chpasswd
RUN sed -i 's/PermitRootLogin without-password/PermitRootLogin yes/' /etc/ssh/sshd_config
# SSH login fix. Otherwise user is kicked off after login
RUN sed 's#session\s*required\s*pam_loginuid.so#session optional pam_loginuid.so#g' -i /etc/pam.d/sshd
ENV NOTVISIBLE "in users profile"
RUN echo "export VISIBLE=now" >> /etc/profile
EXPOSE 22
CMD ["/usr/sbin/sshd", "-D"]
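Roughly how you would build, run, and connect (the image and container names are arbitrary, and the mapped port will differ on your machine):
docker build -t eg_sshd .
docker run -d -P --name test_sshd eg_sshd
docker port test_sshd 22
# e.g. 0.0.0.0:49154
ssh root@<docker-host-ip> -p 49154
# password: screencast (set by the chpasswd line above)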
Using the CMD command in your Dockerfile will indeed enable SSH:
CMD ["/usr/sbin/sshd", "-D"]
But there is a huge downside. If you already have a CMD command (one that starts MySQL, for example), then you are facing a problem not easily resolved in Docker: you can use only one CMD in a Dockerfile. There is a workaround, though, using Supervisor. First, tell the Dockerfile to install Supervisor:
RUN apt-get install -y openssh-server supervisor
Using Supervisor, you can start as many processes as you want on container startup. These processes are defined in a supervisor.conf file (the name is arbitrary) located in the directory with your Dockerfile. In your Dockerfile you tell Docker to copy this file during the build:
ADD supervisor-base.conf /etc/supervisor.conf
Then tell Docker to start Supervisor when the container starts (when Supervisor starts, it will also start all the processes listed in the supervisor.conf file mentioned above):
CMD ["supervisord", "-c", "/etc/supervisor.conf"]
Your supervisor.conf file may look like this:
[supervisord]
nodaemon=true
[program:sshd]
directory=/usr/local/
command=/usr/sbin/sshd -D
autostart=true
autorestart=true
redirect_stderr=true
There is one issue to be careful about: Supervisor needs to start as root, otherwise it will throw errors. So if your Dockerfile defines a user to start the container with (e.g. USER jboss), then you should put USER root at the end of your Dockerfile so that Supervisor starts as root. In your supervisor.conf file you simply define a user for each process:
[program:wildfly]
user=jboss
command=/opt/jboss/wildfly/bin/standalone.sh -b 0.0.0.0 -bmanagement 0.0.0.0
[program:chef]
user=chef
command=/bin/bash -c chef-2.1/bin/start.sh
Of course, these users need to be defined in your Dockerfile beforehand, e.g.:
RUN groupadd -r -f jboss -g 2000 && useradd -u 2000 -r -g jboss -m -d /opt/jboss -s /sbin/nologin -c "JBoss user" jboss
You can learn more about Supervisor+Docker+SSH in more detail in this article.
Notice: this answer promotes a tool I've written.
Some answers here suggest placing an SSH server inside your container. Conceptually, running multiple processes in one container is not the right approach (https://docs.docker.com/articles/dockerfile_best-practices/). A more favorable solution is one that involves multiple containers, each running its own process/service; linking them together results in a coherent application.
I've created a containerized SSH server that you can 'stick' to any running container. This way you can create compositions with every container, without that container even knowing about SSH. The only requirement is that the container has bash.
The following example would start an SSH server attached to a container with name 'sshd-web-server1'.
docker run -ti --name sshd-web-server1 -e CONTAINER=web-server1 -p 2222:22 \
-v /var/run/docker.sock:/var/run/docker.sock -v $(which docker):/usr/bin/docker \
jeroenpeeters/docker-ssh
You connect to the SSH server with the SSH client of your choice, just as you normally would.
Be advised: Docker-SSH is currently still under development, but it does work! Please let me know what you think.
For more pointers and documentation see: https://github.com/jeroenpeeters/docker-ssh
You can find prebuilt images with SSH installed, for instance CentOS (tutum/centos) and Debian (tutum/debian).
Here are the Dockerfiles used to build them:
https://github.com/tutumcloud/tutum-centos/blob/master/Dockerfile
https://github.com/tutumcloud/tutum-debian/blob/master/Dockerfile