Docker container exited, not running

I have a Java application that runs in one Docker container and connects to a MySQL database in another container. The problem is that the javaserver container exits instead of staying up, while the mysql8server container runs fine.
I start everything with the shell script ./run.sh:
#!/bin/bash
RECONNECT_BRIDGE=$(docker network ls | grep -c rconnect_bridge)
echo "RECONNECT_BRIDGE COUNT = $RECONNECT_BRIDGE"
if [ "$RECONNECT_BRIDGE" -ne 0 ]; then
    docker network rm rconnect_bridge
    echo "Removing previous reconnect bridge"
fi
docker network create rconnect_bridge
echo "reconnect_bridge has been successfully created"
MYSQL_CONTAINER=$(docker container ls -a | grep -c mysql8server)
echo "MYSQL_CONTAINER COUNT $MYSQL_CONTAINER"
if [ "$MYSQL_CONTAINER" -ne 0 ]; then
    docker container stop mysql8server
    docker container rm mysql8server
    echo "Previous mysql8server stopped and removed"
fi
# Check the mysql data directory
if [ ! -d "/u01/data/mysql" ]; then
    mkdir -p /u01/data/mysql
    chmod u+xrw /u01/data/mysql
    echo "/u01/data/mysql folder has been created"
fi
# Create the mysql container
docker container run -d --name mysql8server --network rconnect_bridge -v /u01/data/mysql:/var/lib/mysql -e MYSQL_ROOT_PASSWORD=root mysql:8.0.25
echo "waiting for the mysql server to be launched"
sleep 30
echo "launching mysql8server"
# Build the javaserver image
docker build -t javaserver:1.0 .
JAVA_CONTAINER=$(docker container ls -a | grep -c javaserver)
if [ "$JAVA_CONTAINER" -ne 0 ]; then
    docker container stop javaserver
    docker container rm javaserver
fi
docker container run -it -d --name javaserver --network rconnect_bridge javaserver:1.0 /bin/bash
echo "java server launched successfully"
Dockerfile
FROM ubuntu:21.04
ENV JAVA_HOME=/u01/data/jdk-11
ENV PATH=$PATH:${JAVA_HOME}/bin
RUN mkdir -p /u01/data
WORKDIR /u01/data
ADD https://download.java.net/openjdk/jdk11/ri/openjdk-11+28_linux-x64_bin.tar.gz .
RUN gunzip openjdk-11+28_linux-x64_bin.tar.gz
RUN tar -xvf openjdk-11+28_linux-x64_bin.tar
RUN rm -f openjdk-11+28_linux-x64_bin.tar
ADD https://archive.apache.org/dist/tomcat/tomcat-9/v9.0.45/bin/apache-tomcat-9.0.45.tar.gz .
RUN gunzip apache-tomcat-9.0.45.tar.gz
RUN tar -xvf apache-tomcat-9.0.45.tar
RUN rm -f apache-tomcat-9.0.45.tar
COPY target/rconnect.war /u01/data/apache-tomcat-9.0.45/webapps/
RUN echo "copying the war file to the destination"
COPY src/main/db/db-schema.sql /u01/data/
COPY startup.sh .
RUN chmod u+x /u01/data/startup.sh
ENTRYPOINT ["/u01/data/startup.sh"]
CMD ["tail","-f","/dev/null"]
startup.sh
#!/bin/bash
set -e
echo "creating the db schema"
mysql -uroot -proot -hmysql8server < /u01/data/db-schema.sql
/u01/data/apache-tomcat-9.0.45/bin/startup.sh &
exec "$@"

Related

Shell script on running a docker container

I created a Dockerfile like below:
FROM alpine:latest
WORKDIR /
COPY ./init.sh .
CMD ["/bin/sh", "./init.sh"]
and a script file init.sh like below:
#!/bin/sh
mkdir -p mount_point
echo hello > ./mount_point/hello.txt
and I built an image using these:
docker build . -t test_build
and ran it as
docker container run --rm --name test_run -it test_build sh
where there are only two above files in the folder.
In the container, I can find the init.sh file, executable (x) as it is on the host.
However, there is no mount_point folder, which should have been created by
CMD ["/bin/sh", "./init.sh"]
Note that, when I run any of the below in the container, it successfully creates mount_point as I expected
sh init.sh
or
/bin/sh init.sh
and
sh -c ./init.sh
Could you tell me where I made mistakes?
When you do
docker container run --rm --name test_run -it test_build sh
the sh at the end overrides the CMD definition in the image and the CMD isn't run.
To verify that your script works, you can change the script to something like this:
#!/bin/sh
echo Hello from the script!
mkdir -p mount_point
echo hello > ./mount_point/hello.txt
ls -al ./mount_point
Then run the image without the sh and you should see the 'Hello' message and the directory listing from the ./mount_point directory.
docker container run --rm --name test_run test_build
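The same rule applies to any trailing arguments, not just sh. A quick sketch with the image above:

docker container run --rm test_build              # runs the CMD: /bin/sh ./init.sh
docker container run --rm test_build ls -al /     # trailing args replace the CMD; init.sh never runs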

docker-compose stop / start my_image

Is it normal to lose all data, installed applications and created folders inside a container when executing docker-compose stop my_image and docker-compose start my_image?
I'm creating container with docker-compose up --scale my_image=4
update no. 1
My containers have an sshd server running in them. When I connect to a container and execute touch test.txt, I see that the file was created.
However, after executing docker-compose stop my_image and docker-compose start my_image, the container is empty and ls -l shows that test.txt is absent.
update no. 2
my Dockerfile
FROM oraclelinux:8.5
RUN (yum update -y; \
    yum install -y openssh-server openssh-clients initscripts wget passwd tar crontabs unzip; \
    yum clean all)
RUN (ssh-keygen -A; \
    sed -i 's/UsePAM yes/#UsePAM yes/g' /etc/ssh/sshd_config; \
    sed -i 's/#UsePAM no/UsePAM no/g' /etc/ssh/sshd_config; \
    sed -i 's/#PermitRootLogin yes/PermitRootLogin yes/' /etc/ssh/sshd_config; \
    sed -i 's/#PasswordAuthentication yes/PasswordAuthentication yes/' /etc/ssh/sshd_config)
RUN (mkdir -p /root/.ssh/; \
    echo "StrictHostKeyChecking=no" > /root/.ssh/config; \
    echo "UserKnownHostsFile=/dev/null" >> /root/.ssh/config)
RUN echo "root:oraclelinux" | chpasswd
COPY entrypoint.sh /entrypoint.sh
RUN chmod +x /entrypoint.sh
ENTRYPOINT ["/entrypoint.sh"]
EXPOSE 22
my docker-compose
version: '3.9'
services:
  my_image:
    build:
      context: .
      dockerfile: Dockerfile
    ports:
      - 30000-30007:22
When I connect to a container:
Execute touch test.txt
Execute docker-compose stop my_image
Execute docker-compose start my_image
Execute ls -l
I see no file test.txt (in fact, the folder is empty)
update no. 3
entrypoint.sh
#!/bin/sh
# Start the ssh server
/usr/sbin/sshd -D
# Execute the CMD
exec "$@"
Other details
When containers are all up and running, I choose a container running on a specific port, say port 30001, then using putty I connect to that specific container,
execute touch test.txt
execute ls -l
I do see that the file was created
I execute docker-compose stop my_image
I execute docker-compose start my_image
I connect via putty to port 30001
I execute ls -l
I see no file (folder is empty)
I try other containers to see if the file exists inside one of them, but I see no file present.
So, after some brute-force debugging, I realized that I lose data only when I fail to disconnect from ssh before stopping/restarting the container. When I do disconnect, the data does not disappear after a stop/restart.
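If the data must survive the container being recreated (not just stopped and started), a named volume is the usual fix. A minimal sketch against the compose file above, assuming the files you care about live under /root; note that with --scale all replicas would share this one volume:

version: '3.9'
services:
  my_image:
    build:
      context: .
      dockerfile: Dockerfile
    ports:
      - 30000-30007:22
    volumes:
      - my_data:/root
volumes:
  my_data: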

Why are exited docker containers not getting removed?

File name: dockerHandler.sh
#!/bin/bash
set -e
to=$1
shift
cont=$(docker run -d "$#")
code=$(timeout "$to" docker wait "$cont" || true)
docker kill $cont &> /dev/null
docker rm $cont
echo -n 'status: '
if [ -z "$code" ]; then
echo timeout
else
echo exited: $code
fi
echo output:
# pipe to sed simply for pretty nice indentation
docker logs $cont | sed 's/^/\t/'
docker rm $cont &> /dev/null
But whenever I check the container status after running the docker image, I get a list of exited containers.
Command: docker ps -as
Hence, to delete those exited containers, I manually run the command below:
docker rm $(docker ps -a -f status=exited -q)
You should add the --rm flag to your docker run command.
From the Docker help:
$ docker run --help | grep rm
    --rm    Automatically remove the container when it exits
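Applied to the script above, that would be a sketch like:

cont=$(docker run --rm -d "$@")   # container is removed automatically once it exits

One caveat: with --rm the later docker logs and docker rm calls can fail, because the container may already be gone by the time they run.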
I removed these lines:
docker kill $cont &> /dev/null
docker rm $cont
docker logs $cont | sed 's/^/\t/'
and used gtimeout instead of timeout on Mac, and it works fine.
To install gtimeout on Mac, install coreutils:
brew install coreutils
Then, in line 8 of DockerTimeout.sh, change timeout to gtimeout.

JMeter and Docker

I saw some blog posts where people talk about JMeter and Docker. I understand that Docker is helpful for setting up a container with all the dependencies. But they all run/create the containers on the same host, so all the containers share the host's resources. It is like running multiple instances of JMeter on the same host, which will not help generate more load.
When a host has 12GB of RAM, I think one instance of JMeter with a 10GB heap can generate more load than 10 containers each running one JMeter instance.
What is the point of running Docker here?
I made an automated solution that can be easily integrated with Jenkins.
The Dockerfile extends the java:8 image and adds the JMeter build. I will call this image jmeter-base:
FROM java:8
RUN mkdir /jmeter \
    && cd /jmeter/ \
    && wget https://archive.apache.org/dist/jmeter/binaries/apache-jmeter-3.3.tgz \
    && tar -xvzf apache-jmeter-3.3.tgz \
    && rm apache-jmeter-3.3.tgz
ENV JMETER_HOME /jmeter/apache-jmeter-3.3/
# Add JMeter to the PATH
ENV PATH $JMETER_HOME/bin:$PATH
If you want to use a master-slave solution, this is the jmeter master Dockerfile:
FROM jmeter-base
WORKDIR $JMETER_HOME
# Ports to be exposed from the container for JMeter Master
RUN mkdir scripts
EXPOSE 60000
And this is the jmeter slave Dockerfile:
FROM jmeter-base
# Ports to be exposed from the container for JMeter Slaves/Server
EXPOSE 1099 50000
# Application to run on starting the container
ENTRYPOINT $JMETER_HOME/bin/jmeter-server \
    -Dserver.rmi.localport=50000 \
    -Dserver_port=1099
Now, with both images in place, you need a script that discovers all the slave IPs and runs the tests. This script does the whole job:
#!/bin/bash
COUNT=${1-1}
docker build -t jmeter-base jmeter-base
docker-compose build && docker-compose up -d && docker-compose scale master=1 slave=$COUNT
SLAVE_IP=$(docker inspect -f '{{.Name}} {{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' $(docker ps -aq) | grep slave | awk -F' ' '{print $2}' | tr '\n' ',' | sed 's/.$//')
WDIR=`docker exec -it master /bin/pwd | tr -d '\r'`
mkdir -p results
for filename in scripts/*.jmx; do
    NAME=$(basename $filename)
    NAME="${NAME%.*}"
    eval "docker cp $filename master:$WDIR/scripts/"
    eval "docker exec -it master /bin/bash -c 'mkdir $NAME && cd $NAME && ../bin/jmeter -n -t ../$filename -R$SLAVE_IP'"
    eval "docker cp master:$WDIR/$NAME results/"
done
docker-compose stop && docker-compose rm -f
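For reference, the script assumes a docker-compose.yml along these lines; the original answer doesn't show it, so this is a sketch with hypothetical build contexts, matching the master=1 slave=$COUNT scale call:

version: '3'
services:
  master:
    build: ./jmeter-master   # hypothetical directory holding the master Dockerfile
    container_name: master   # the script addresses the master by this name
    tty: true                # keeps the container alive for docker exec
  slave:
    build: ./jmeter-slave    # hypothetical directory holding the slave Dockerfile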
I came to understand from this post by a friend of mine that we should not run multiple Docker containers on the same host to generate more load:
http://www.testautomationguru.com/jmeter-distributed-load-testing-using-docker/
Instead, the point of Docker here is to quickly set up the JMeter environment.

docker container volumes from directory access in CMD instruction

$ sudo docker run -d --name ext -v /external busybox /bin/sh
and
run.sh
#!/bin/bash
if [[ -f "/external" ]]
then
    echo 'success!'
else
    echo "Sorry, I can't find /external..."
fi
and
Dockerfile
FROM ubuntu:14.04
MAINTAINER newbie
ADD run.sh /run.sh
RUN chmod +x /run.sh
CMD ["bash", "/run.sh"]
and
$ sudo docker build -t app .
and
$ sudo docker run -d --volumes-from ext app
ac57afb95f923eeffd28e7d9d9cb76cb1b7699ebd
So
$ sudo docker logs ac57afb95f923eeffd28e7d9d9cb76cb1b7699ebd
Sorry, I can't find /external...
My question is: how can I access the /external directory from run.sh in the CMD instruction? Or is it impossible?
Thank you~
Modify your run.sh: -f checks whether a file exists; in this case, use -d, which checks whether a directory exists. See: Check if a directory exists in a shell script.
Furthermore, if you only want a volume container, you don't need the -d flag or the /bin/sh command; the volume-container run command changes to this:
$ sudo docker run --name ext -v /external busybox
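Putting that together, the corrected check in run.sh would be:

#!/bin/bash
# -d is true if /external exists and is a directory
if [[ -d "/external" ]]
then
    echo 'success!'
else
    echo "Sorry, I can't find /external..."
fi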
