Hazelcast Docker container can't run continuously - docker

I built a Hazelcast Docker container. But when I run the Hazelcast container, it only prints some startup logs without actually keeping running.
The Dockerfile is:
# centos7_jdk7 is a CentOS 7 base image with JDK 7 installed
FROM tianshangdeyun/centos7_jdk7
# hazelcast-3.6.1 is downloaded from the Hazelcast official site
COPY hazelcast-3.6.1 /hazelcast-3.6.1
# add the Hazelcast start script
COPY run.sh /run.sh
RUN chmod 777 /run.sh
EXPOSE 5701
CMD ["/run.sh"]
The run.sh is:
#!/bin/bash
/hazelcast-3.6.1/bin/server.sh
I run the hazelcast container with 'docker run hazelcast:3.6.1'.
The container prints its startup log, but 'docker ps' doesn't show it afterwards.
I'd appreciate your help.

The problem is that server.sh starts the Java application in the background rather than in the foreground. This means server.sh starts the server and then exits, so your run.sh script also exits, and Docker thinks the work is done and stops the container, even though Hazelcast is still running. This is a common problem when dockerizing applications.
As far as I can tell, there is no native way to run Hazelcast in the foreground. What you can do instead is modify server.sh. In this case the modification is very easy: add a wait statement near the end of server.sh, right after the echo $! > ${PID_FILE} line:
if [ -z "${PID}" ]; then
    echo "Process id for hazelcast instance is written to location: {$PID_FILE}"
    $RUN_JAVA -server $JAVA_OPTS com.hazelcast.core.server.StartServer &
    echo $! > ${PID_FILE}
    wait
else
    echo "Another hazelcast instance is already started in this folder. To start a new instance, please unzip 3.6.1.zip/tar.gz in a new folder."
    exit 0
fi
The wait statement blocks until the Java application terminates and only then returns, so server.sh (and therefore your run.sh) keeps running as long as Hazelcast does, and the Docker container only exits when Hazelcast stops.
Try that, it will work!
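Alternatively, you could skip server.sh entirely and launch the JVM in the foreground from run.sh yourself. A minimal sketch, assuming the stock hazelcast-3.6.1 download layout (the hazelcast-all jar name is an assumption; check the lib directory of your distribution):
#!/bin/bash
# Replace the shell with the JVM so the Java process becomes the container's
# main process and keeps it alive until Hazelcast itself stops.
exec java -server $JAVA_OPTS \
    -cp "/hazelcast-3.6.1/lib/hazelcast-all-3.6.1.jar" \
    com.hazelcast.core.server.StartServer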

Related

Google Cloud Run error: Container failed to start (DSS - Digital Signature Service)

I'm trying to get the following Docker container running on Google Cloud. The container works locally. In Cloud Shell, the container also works with "docker run", and I can see the port 8080 web preview. But when I create a Cloud Run service, the container does not start. The log only says "tomcat started, container called exit (0)".
I added address="0.0.0.0" to the connector in server.xml, but that didn't help either.
Maybe someone can give me a hint.
Thank you
Tom
FROM openjdk:8-alpine
RUN apk update && apk add unzip
ADD https://ec.europa.eu/cefdigital/artifact/repository/esignaturedss/eu/europa/ec/joinup/sd-dss/dss-demo-bundle/5.8.1/dss-demo-bundle-5.8.1.zip /tmp
RUN unzip /tmp/dss-demo-bundle-5.8.1.zip -d /tmp
RUN mv /tmp/dss-demo-bundle-5.8.1 /dss
RUN chmod +x /dss/apache-tomcat-8.5.61/bin/catalina.sh
COPY ./startup.sh /dss/
ENTRYPOINT [ "/dss/startup.sh" ]
CMD [ "/bin/sh" ]
This is the source code of startup.sh:
#!/bin/sh
set -e
echo "`/bin/sh /dss/apache-tomcat-8.5.61/bin/startup.sh`"
exec "$@"
Thank you, the solution was to change the Tomcat startup to "catalina.sh run", which starts Tomcat as a foreground process.
The second thing: I had to remove the address="0.0.0.0" attribute from the Tomcat server.xml file.
#!/bin/sh
set -e
echo "`/bin/sh /dss/apache-tomcat-8.5.61/bin/catalina.sh run`"
exec "$@"
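For what it's worth, a slightly simpler variant of that script (a sketch, assuming the same Tomcat path) is to exec catalina.sh run directly, so Tomcat becomes the container's main process and Cloud Run sees it running in the foreground:
#!/bin/sh
set -e
# Run Tomcat in the foreground; the container stays up as long as Tomcat does.
exec /dss/apache-tomcat-8.5.61/bin/catalina.sh run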

Start Node Manager in WebLogic (Docker) using a script

I tried to dockerize a WebLogic server. Now I am facing an issue with starting the Node Manager after the server is started inside the Docker container. My Dockerfile is below.
FROM oracle/weblogic:12.1.3-generic
ENV JAVA_OPTIONS="${JAVA_OPTIONS} -Dweblogic.nodemanager.SecureListener=false" \
    ADMIN_PORT="7001" \
    ADMIN_HOST="localhost"
USER oracle
COPY dockerfiles/keyStore/keystore_ss.jks /u01/oracle/keystore/
COPY dockerfiles/patch/* /u01/oracle/patch/
COPY dockerfiles/local_domainScripts /u01/oracle/local_domainScripts/
COPY dockerfiles/scripts/* /u01/oracle/
COPY dockerfiles/applicationFiles/ /u01/oracle/applicationFiles/
USER root
RUN yum install -y procps
RUN chmod +x startWeblogic.sh
USER oracle
RUN /u01/oracle/wlst /u01/oracle/local_domainScripts/config.py
RUN nohup bash -c "/u01/oracle/user_projects/domains/local_domain/bin/startNodeManager.sh &" && sleep 4
CMD ["/u01/oracle/user_projects/domains/local_domain/startWebLogic.sh"]
This will create the WebLogic server instance. I want to start the Node Manager after this server is started.
Run command:
docker run -d --name wls_local_domain --network=host --hostname localhost -p 7001:7001 test-docker:0-SNAPSHOT
When ./startNodeManager.sh is executed inside the container, it starts the Node Manager. The WebLogic server needs to be started first before the Node Manager can be started.
I want to do this using a bash script. I tried this one, but it didn't help:
github link
You can't (usefully) RUN a background process. That Dockerfile command launches an intermediate container executing the RUN command, saves its filesystem, and exits; there is no process running any more by the time the next Dockerfile command executes.
If this is a commercially maintained image, you might look into whether Oracle has instructions on how to use it. (From clicking around, none of the samples there start a node manager; is it necessary?)
Best practice is generally to run only one server in a Docker container (and ideally in the foreground and as the container's main process). If that will work and there aren't shared filesystem dependencies, you can split all of this except the final CMD into one base Dockerfile, then have two additional Dockerfiles that just have a FROM line pointing at your mostly-built image and a requested CMD.
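A rough sketch of that split, using an illustrative base image tag (my-weblogic-base) and the paths from the question's Dockerfile:
# Dockerfile.base: all the shared setup, no CMD
FROM oracle/weblogic:12.1.3-generic
# ... COPY the domain scripts, keystore, and patches, and run the WLST config here,
# exactly as in the original Dockerfile, but stop before the final CMD ...

# Dockerfile.admin: image that only runs the WebLogic admin server
FROM my-weblogic-base
CMD ["/u01/oracle/user_projects/domains/local_domain/startWebLogic.sh"]

# Dockerfile.nodemanager: image that only runs the Node Manager
FROM my-weblogic-base
CMD ["/u01/oracle/user_projects/domains/local_domain/bin/startNodeManager.sh"]
You would then run one container from each of the two derived images.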
If that really won't work then you'll have to fall back to running some init system in your container, typically supervisord.
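For the supervisord route, a minimal sketch might look like this (the config path and program names are illustrative; the two command paths come from the question's Dockerfile):
# /etc/supervisord.conf
[supervisord]
nodaemon=true

[program:nodemanager]
command=/u01/oracle/user_projects/domains/local_domain/bin/startNodeManager.sh
autorestart=true

[program:weblogic]
command=/u01/oracle/user_projects/domains/local_domain/startWebLogic.sh
autorestart=true
The image would additionally need the supervisor package installed and a CMD such as CMD ["supervisord", "-n", "-c", "/etc/supervisord.conf"], so supervisord runs in the foreground as the container's main process.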
You need to start the Node Manager as a background process and then start the server. To keep the Docker container alive while the background processes are running, you can use the tail command.
This is how I start the Node Manager and the WebLogic server in my container:
#!/bin/bash

# ------------------------------------------------------------------------------
# start the Node Manager
# ------------------------------------------------------------------------------
function startNodeManager() {
    echo "starting the node manager for $MANAGED_SERVER_NAME server..."
    "$ORACLE_HOME/user_projects/domains/$DOMAIN_NAME/bin/startNodeManager.sh" &

    while ! nc -z "$HOSTNAME" "$NODE_MANAGER_PORT"; do
        sleep 0.5
    done
    echo "node manager is up and ready to receive requests"
}

# ------------------------------------------------------------------------------
# start the WebLogic Admin server
# ------------------------------------------------------------------------------
function startAdminServer() {
    echo "starting the $ADMIN_SERVER_NAME server..."
    local logHome
    logHome="$ORACLE_HOME/user_projects/domains/$DOMAIN_NAME/servers/$ADMIN_SERVER_NAME/logs"
    mkdir -p "$logHome"
    "$ORACLE_HOME/user_projects/domains/$DOMAIN_NAME/bin/startWebLogic.sh" > "$logHome/$ADMIN_SERVER_NAME.out" 2>&1 &
}

# ------------------------------------------------------------------------------
# main app starts here
# ------------------------------------------------------------------------------
startNodeManager
startAdminServer

# this command keeps alive the docker container
tail -F \
    "$ORACLE_HOME/user_projects/domains/$DOMAIN_NAME/servers/$ADMIN_SERVER_NAME/logs/$ADMIN_SERVER_NAME.log" \
    "$ORACLE_HOME/user_projects/domains/$DOMAIN_NAME/servers/$ADMIN_SERVER_NAME/logs/$ADMIN_SERVER_NAME.nohup" \
    "$ORACLE_HOME/user_projects/domains/$DOMAIN_NAME/servers/$ADMIN_SERVER_NAME/logs/$ADMIN_SERVER_NAME.out"
This is a complete startup script that you can use as an example and improve upon. It starts the Node Manager and the admin server: https://github.com/zappee/docker-images/blob/master/oracle-weblogic/oracle-weblogic-12.2.1.4-admin-server/container-scripts/startup.sh
You can download the complete working solution from there.

Keep the docker container running while stopping and starting the process

I have a Spring Boot jar which I invoke when running a Docker container. Everything runs fine.
Now, there are certain other operations that this jar also supports. To use those operations, I have to invoke the jar again (going inside the container), passing in the required parameters. The problem is that some operations kill the already running process, make whatever change is required, and then start the app again. As soon as the process gets killed, the Docker container also stops.
How to keep the container running during the whole process?
I will not discuss automatically restarting a killed container, since it would not answer your question (but depending on your situation, you may want to ask yourself why that solution does not fit your needs).
The container stops when the main process launched by the entrypoint defined in your image is killed inside the container. So, to keep the container running, use an entrypoint that does not stop when some operation needs to restart the Java app. Moreover, this entrypoint could itself launch those operations; it would then act as a process controller for your Java application.
Here is a Dockerfile with such an example; as you can see, the entrypoint is a dedicated shell script, not the Java application directly.
FROM [...]
EXPOSE 443
[...]
COPY entrypoint.sh /usr/local/bin
RUN chmod 755 /usr/local/bin/entrypoint.sh
ENTRYPOINT ["/usr/local/bin/entrypoint.sh"]
Now, write your entrypoint.sh this way:
#!/bin/bash
[...]
# launch your spring boot jar in a subprocess
java -jar target/myproject-0.0.1-SNAPSHOT.jar > /dev/null 2>&1 &
# or: mvn spring-boot:run > /dev/null 2>&1 &
# you may detach your java process from the shell job list, if needed
disown %1
# now wait indefinitely; a "docker stop" should be the only way to stop this container
while sleep 1
do
    echo waiting for this container to be terminated
    # if needed, launch your app again (in case it has been terminated and not relaunched automatically)
    if ! ps auxgww | grep -v grep | grep java
    then
        java -jar target/myproject-0.0.1-SNAPSHOT.jar > /dev/null 2>&1 &
        # or: mvn spring-boot:run > /dev/null 2>&1 &
        # you may detach your java process from the shell job list, if needed
        disown %1
    fi
done

Run command in Docker Container only on the first start

I have a Docker image which uses a script (/bin/bash /init.sh) as its entrypoint. I would like to execute this script only on the first start of a container. It should be skipped when the container is restarted or started again after a crash of the Docker daemon.
Is there any way to do this with Docker itself, or do I have to implement some kind of check in the script?
I had the same issue; here is a simple procedure (i.e. a workaround) to solve it:
Step 1:
Create a "myStartupScript.sh" script that contains this code:
#!/bin/sh
CONTAINER_ALREADY_STARTED="CONTAINER_ALREADY_STARTED_PLACEHOLDER"
if [ ! -e $CONTAINER_ALREADY_STARTED ]; then
    touch $CONTAINER_ALREADY_STARTED
    echo "-- First container startup --"
    # YOUR_JUST_ONCE_LOGIC_HERE
else
    echo "-- Not first container startup --"
fi
Step 2:
Replace the line "# YOUR_JUST_ONCE_LOGIC_HERE" with the code you want to be executed only the first time the container is started
Step 3:
Set the script as the entrypoint in your Dockerfile:
ENTRYPOINT ["/myStartupScript.sh"]
In summary, the logic is quite simple: the script checks whether a specific marker file is present in the filesystem. If it is not, it creates it and executes your just-once code. The next time you start the container the file is already there, so that code is not executed again.
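If the entrypoint script should also hand over to the container's main process after the one-time logic, a common pattern is to end it with exec "$@" so that whatever CMD specifies becomes the long-running main process. A minimal sketch (the marker path and the example CMD below are illustrative, not from the question):
#!/bin/sh
MARKER="/var/lib/first-run-done"
if [ ! -e "$MARKER" ]; then
    echo "-- First container startup --"
    # YOUR_JUST_ONCE_LOGIC_HERE
    touch "$MARKER"
fi
# hand over to the command given in CMD (or on the docker run command line)
exec "$@"
In the Dockerfile this would be combined with ENTRYPOINT ["/myStartupScript.sh"] and a CMD naming the container's normal long-running command, for example CMD ["java", "-jar", "/app/app.jar"].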
The entrypoint of a Docker container tells the Docker daemon what to run when you "run" that specific container. So let's ask: what should the container run when it is started a second time, or after being rebooted?
Probably what you are doing follows the same approach as "old-school" provisioning mechanisms: your script "installs" the needed scripts and you then run your app as a systemd/upstart service, right? If so, you should change that into a more "dockerized" definition.
The entrypoint for the container should be a script that actually launches your app instead of setting things up. Let's say you need Java installed to run your app. So in the Dockerfile you set up the base image and install everything you need, like:
FROM alpine:edge
RUN apk --update upgrade && apk add openjdk8-jre-base
RUN mkdir -p /opt/your_app/ && adduser -HD userapp
ADD target/your_app.jar /opt/your_app/your-app.jar
ADD scripts/init.sh /opt/your_app/init.sh
USER userapp
EXPOSE 8081
CMD ["/bin/bash", "/opt/your_app/init.sh"]
At the company I work for, our containers fetch their configs from Consul in the init.sh script before running the actual app (instead of providing a mount point and placing the configs on the host, or embedding them into the container). So the script looks something like:
#!/bin/bash
echo "Downloading config from consul..."
confd -onetime -backend consul -node $CONSUL_URL -prefix /cfgs/$CONSUL_APP/$CONSUL_ENV_NAME
echo "Launching your-app..."
java -jar /opt/your_app/your-app.jar
One piece of advice I can give you (from my admittedly short experience working with containers) is to treat your containers as stateless once they are provisioned (provisioning being everything that runs before the entrypoint).
I had to do this and ended up doing a docker run -d, which just created a detached container with bash running (in the background), followed by a docker exec that did the necessary initialization. Here's an example:
docker run -itd --name=myContainer myImage /bin/bash
docker exec -it myContainer /bin/bash -c /init.sh
Now when I restart my container I can just do
docker start myContainer
docker attach myContainer
This may not be ideal, but it works fine for me.
I wanted to do the same in a Windows container. It can be achieved with Task Scheduler on Windows (the Linux equivalent of Task Scheduler is cron, which you could use in your case). To do this, edit the Dockerfile and add the following lines at the end:
WORKDIR /app
COPY myTask.ps1 .
RUN schtasks /Create /TN myTask /SC ONSTART /TR "c:\WINDOWS\system32\WindowsPowerShell\v1.0\powershell.exe C:\app\myTask.ps1" /ru SYSTEM
This creates a task named myTask, runs it ONSTART, and the task itself executes a PowerShell script placed at "C:\app\myTask.ps1".
This myTask.ps1 script does whatever initialization you need on container startup. Make sure you delete the task once it has run successfully, or else it will run at every startup. To delete it, you can use the following command at the end of the myTask.ps1 script:
schtasks /Delete /TN myTask /F

How can I auto-start a service in Docker using OpenRC (Alpine)?

I'm using an Alpine flavor from iron.io. I want to auto-run a trivial 'blink' script as a service when the Docker image starts. (I want derivative images that use this as a base to not know or care about this service; it would just be "inherited" and run.) I was using S6 for this, and that works well, but I wanted to see if something already built into Alpine would work out of the box.
My Dockerfile:
FROM iron/scala
ADD blinkin /bin/
ADD blink /etc/init.d/
RUN rc-update add blink default
And my service script:
#!/sbin/openrc-run
command="/bin/blinkin"
depend()
{
need localmount
}
The /bin/blinkin script:
#!/bin/bash
for I in $(seq 1 5);
do
echo "BLINK!"
sleep 1
done
So I build the Docker image and run it. I see no "BLINK!" output. My script is in /bin and I can run it manually, and that works. My blink service script is in /etc/init.d and symlinked from /etc/runlevels/default. So everything looks OK, but it doesn't seem as if anything has run.
If I try 'rc-service blink start' I still see no "BLINK!" output, but I do get this:
* WARNING: blink is already starting
What am I doing wrong?
You may find my dockerfy utility useful for starting services and pre-running initialization commands before the primary command starts. See https://github.com/markriggins/dockerfy
For example:
RUN wget https://github.com/markriggins/dockerfy/releases/download/0.2.4/dockerfy-linux-amd64-0.2.4.tar.gz; \
tar -C /usr/local/bin -xvzf dockerfy-linux-amd64-*tar.gz; \
rm dockerfy-linux-amd64-*tar.gz;
ENTRYPOINT dockerfy --start bash -c "while true; do echo BLINK; sleep 1; done" -- \
           --reap -- \
           nginx
This would run a bash loop as a service, echoing BLINK every second, while the primary command nginx runs. If nginx exits, the BLINK service is automatically stopped.
As an added benefit, any zombie processes left over by nginx will be automatically cleaned up.
You can also tail log files such as /var/log/nginx/error.log to stderr, edit nginx's configuration prior to startup, and much more.
