I tried to dockerize a WebLogic server. Now I am facing an issue with starting the node manager after the server is started inside the docker container. My Dockerfile is as below.
FROM oracle/weblogic:12.1.3-generic
ENV JAVA_OPTIONS="${JAVA_OPTIONS} -Dweblogic.nodemanager.SecureListener=false" \
ADMIN_PORT="7001" \
ADMIN_HOST="localhost"
USER oracle
COPY dockerfiles/keyStore/keystore_ss.jks /u01/oracle/keystore/
COPY dockerfiles/patch/* /u01/oracle/patch/
COPY dockerfiles/local_domainScripts /u01/oracle/local_domainScripts/
COPY dockerfiles/scripts/* /u01/oracle/
COPY dockerfiles/applicationFiles/ /u01/oracle/applicationFiles/
USER root
RUN yum install -y procps
RUN chmod +x startWeblogic.sh
USER oracle
RUN /u01/oracle/wlst /u01/oracle/local_domainScripts/config.py
RUN nohup bash -c "/u01/oracle/user_projects/domains/local_domain/bin/startNodeManager.sh &" && sleep 4
CMD ["/u01/oracle/user_projects/domains/local_domain/startWebLogic.sh"]
This will create a WebLogic server instance. I want to start the node manager after this server is started.
Run command:
docker run -d --name wls_local_domain --network=host --hostname localhost -p 7001:7001 test-docker:0-SNAPSHOT
When ./startNodeManager.sh is executed inside the container, it will start the node manager. To start the node manager, the WebLogic server needs to be started first.
I want to do this using a bash script. I tried this one, but it didn't help:
github link
You can't (usefully) RUN a background process. That Dockerfile command launches an intermediate container executing the RUN command, saves its filesystem, and exits; there is no process running any more by the time the next Dockerfile command executes.
If this is a commercially maintained image, you might look into whether Oracle has instructions on how to use it. (From clicking around, none of the samples there start a node manager; is it necessary?)
Best practice is generally to run only one server in a Docker container (and ideally in the foreground and as the container's main process). If that will work and there aren't shared filesystem dependencies, you can split all of this except the final CMD into one base Dockerfile, then have two additional Dockerfiles that just have a FROM line pointing at your mostly-built image and a requested CMD.
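For illustration, a rough sketch of that split (the base image tag test-docker-base:0-SNAPSHOT is just an assumed name; the domain paths are the ones from your Dockerfile above):
# Dockerfile.base -- everything above except the final CMD
# build it with e.g.: docker build -t test-docker-base:0-SNAPSHOT -f Dockerfile.base .

# Dockerfile.admin -- admin server only
FROM test-docker-base:0-SNAPSHOT
CMD ["/u01/oracle/user_projects/domains/local_domain/startWebLogic.sh"]

# Dockerfile.nodemanager -- node manager only
FROM test-docker-base:0-SNAPSHOT
CMD ["/u01/oracle/user_projects/domains/local_domain/bin/startNodeManager.sh"]
You would then run the two resulting images as two separate containers (for example on the same Docker network) instead of two processes in one container.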
If that really won't work then you'll have to fall back to running some init system in your container, typically supervisord.
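If you do go the supervisord route, a minimal sketch might look like the following; it assumes supervisord is installed in the image (e.g. via yum or pip), and the script paths are taken from your Dockerfile above:
# supervisord.conf
[supervisord]
nodaemon=true

[program:nodemanager]
command=/u01/oracle/user_projects/domains/local_domain/bin/startNodeManager.sh
autorestart=true

[program:adminserver]
command=/u01/oracle/user_projects/domains/local_domain/startWebLogic.sh
autorestart=true

# and in the Dockerfile
COPY supervisord.conf /etc/supervisord.conf
CMD ["supervisord", "-c", "/etc/supervisord.conf"]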
You need to start the node manager as a background process and then start the server. In order to keep the docker container alive while the background processes are running, you can use the tail command.
This is how I start the node manager and the WebLogic server in my container:
#!/bin/bash
# ------------------------------------------------------------------------------
# start the Node Manager
# ------------------------------------------------------------------------------
function startNodeManager() {
    echo "starting the node manager for $MANAGED_SERVER_NAME server..."
    "$ORACLE_HOME/user_projects/domains/$DOMAIN_NAME/bin/startNodeManager.sh" &

    while ! nc -z "$HOSTNAME" "$NODE_MANAGER_PORT"; do
        sleep 0.5
    done

    echo "node manager is up and ready to receive requests"
}
# ------------------------------------------------------------------------------
# start the WebLogic Admin server
# ------------------------------------------------------------------------------
function startAdminServer() {
    echo "starting the $ADMIN_SERVER_NAME server..."
    local logHome
    logHome="$ORACLE_HOME/user_projects/domains/$DOMAIN_NAME/servers/$ADMIN_SERVER_NAME/logs"
    mkdir -p "$logHome"
    "$ORACLE_HOME/user_projects/domains/$DOMAIN_NAME/bin/startWebLogic.sh" > "$logHome/$ADMIN_SERVER_NAME.out" 2>&1 &
}
# ------------------------------------------------------------------------------
# main app starts here
# ------------------------------------------------------------------------------
startNodeManager
startAdminServer
# this command keeps alive the docker container
tail -F \
    "$ORACLE_HOME/user_projects/domains/$DOMAIN_NAME/servers/$ADMIN_SERVER_NAME/logs/$ADMIN_SERVER_NAME.log" \
    "$ORACLE_HOME/user_projects/domains/$DOMAIN_NAME/servers/$ADMIN_SERVER_NAME/logs/$ADMIN_SERVER_NAME.nohup" \
    "$ORACLE_HOME/user_projects/domains/$DOMAIN_NAME/servers/$ADMIN_SERVER_NAME/logs/$ADMIN_SERVER_NAME.out"
This is a complete startup script that you can use as an example and improve. It starts the node manager and the admin server: https://github.com/zappee/docker-images/blob/master/oracle-weblogic/oracle-weblogic-12.2.1.4-admin-server/container-scripts/startup.sh
From here you can download the complete working solution.
Related
I have a Dockerfile image based on Ubuntu. I am trying to make a bash script run each day, but the cron job never runs. When the container is running, I check whether cron is running and it is. The bash script works perfectly and the crontab file is correctly copied inside the container. I can't seem to find where the problem is coming from.
Here is the Dockerfile:
FROM snipe/snipe-it:latest
ENV TZ=America/Toronto
RUN apt-get update \
&& apt-get install awscli -y \
&& apt-get clean \
&& apt-get install cron -y \
&& rm -rf /var/lib/apt/lists/*
RUN mkdir /var/www/html/backups_scripts /var/www/html/config/scripts
COPY config/crontab.txt /var/www/html/backups_scripts
RUN /usr/bin/crontab /var/www/html/backups_scripts/crontab.txt
COPY config/scripts/backups.sh /var/www/html/config/scripts
CMD ["cron","-f"]
The last CMD command doesn't work. As soon as I remove the CMD command, I get this message when I check the cron status inside the container:
root@fcfb6052274a:/var/www/html# /etc/init.d/cron status
* cron is not running
Even if I start the cron process before installing the crontab, the crontab still isn't launched.
This Dockerfile is called by a docker swarm file (compose file type). Maybe cron must be activated with the compose file.
How can I tackle this problem? Thank you.
You need to approach this differently, as you have to remember that container images and containers are not virtual machines. They're a single process that starts and is maintained through its lifecycle. As such, background processes (like cron) don't exist in a container.
What I've seen most people do is have the container just execute whatever job you're looking for it to do, like do_the_thing.sh, and then use the docker run command on the host machine to call it via cron.
So, for the sake of argument, let's say you had an image called myrepo/task with a default entrypoint of do_the_thing.sh.
On the host, you could add an entry to crontab:
# m h dom mon dow user command
0 */2 * * * root docker run --rm myrepo/task
Then it's down to a question of design. If the task needs files, you can pass them down via volume. If it needs to put something somewhere when it's done, maybe look at blob storage.
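For example, if the task reads and writes files, the host crontab entry could mount a directory into the container (the /srv/task-data path here is just an assumption for illustration):
# m h dom mon dow user command
0 */2 * * * root docker run --rm -v /srv/task-data:/data myrepo/task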
I think this question is a duplicate, with a detailed response with lots of upvotes here. I followed the top-most Dockerfile example there without issues.
Your CMD running cron in the foreground isn't the problem. I ran a quick version of your Dockerfile and, exec'ing into the container, I could confirm cron was running. I recommend checking how the cron entries in your crontab file are redirecting their output.
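For instance, a crontab entry that redirects the job's output to the container's main stdout/stderr (so it shows up in docker logs) could look like this; the daily schedule and the script path are assumptions based on the Dockerfile above:
0 2 * * * /var/www/html/config/scripts/backups.sh >> /proc/1/fd/1 2>&1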
Expanding on one of the other answers here: a container is actually a lot like a virtual machine, and containers often do run many processes concurrently. If you happen to have any other containers running, you might be able to see this most easily by running docker stats and looking at the PID column.
Also, easy to examine interactively yourself like this:
$ # Create a simple ubuntu running container named my-ubuntu
$ docker run -it -h my-ubuntu ubuntu
root@my-ubuntu$ ps aw # Shows bash and ps processes running.
root@my-ubuntu$ # Launch a ten minute sleep in the background.
root@my-ubuntu$ sleep 600 &
root@my-ubuntu$ ps aw # Now shows sleep also running.
I'm using the Search Guard plugin to secure an elasticsearch cluster composed of multiple nodes.
Here is my Dockerfile:
#!/bin/sh
FROM docker.elastic.co/elasticsearch/elasticsearch:5.6.3
USER root
# Install search guard
RUN bin/elasticsearch-plugin install --batch com.floragunn:search-guard-5:5.6.3-16 \
&& chmod +x \
plugins/search-guard-5/tools/hash.sh \
plugins/search-guard-5/tools/sgadmin.sh \
bin/init_sg.sh \
&& chown -R elasticsearch:elasticsearch /usr/share/elasticsearch
USER elasticsearch
To initialize Search Guard (create internal users and assign roles), I need to run the script init_sg.sh after the container starts.
Here is the problem: unless elasticsearch is running, the script will not initialize any security index.
The script's content is:
sleep 10
plugins/search-guard-5/tools/sgadmin.sh -cd config/ -ts config/truststore.jks -ks config/kirk-keystore.jks -nhnv -icl
For now I just run the script manually after the container starts, but since I'm running it on Kubernetes, pods may get killed or fail and get recreated automatically for some reason. In that case, the plugin has to be initialized automatically after the container starts!
So how to accomplish this? Any help or hint would be really appreciated.
The image itself has an entrypoint, ENTRYPOINT ["/run/entrypoint.sh"], specified in its Dockerfile. You can replace it with your own script: for example, create a new script, mount it, first call /run/entrypoint.sh, and then wait for elasticsearch to start before running your init_sg.sh.
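A rough, untested sketch of such a wrapper; the /run/entrypoint.sh path comes from the image, while the port check and the init_sg.sh location are assumptions you would adapt:
#!/bin/bash
# custom-entrypoint.sh: run the stock entrypoint in the background,
# wait for elasticsearch, then initialize Search Guard
/run/entrypoint.sh "$@" &
es_pid=$!
# wait until the transport port (9300) accepts connections before running sgadmin
until (exec 3<>/dev/tcp/localhost/9300) 2>/dev/null; do
    echo "waiting for elasticsearch..."
    sleep 5
done
/usr/share/elasticsearch/bin/init_sg.sh
# keep the container alive as long as elasticsearch runs
wait "$es_pid"
In the Dockerfile you would then COPY this script in and point ENTRYPOINT at it instead of /run/entrypoint.sh.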
Not sure this will solve your problem, but it's worth checking my repo's Dockerfile.
I have created a simple run.sh file, copied it into the docker image, and in the Dockerfile I wrote CMD ["run.sh"]. In the same way, define whatever you want in run.sh and write CMD ["run.sh"]. You can find another example below.
Dockerfile
FROM java:8
RUN apt-get update && apt-get install stress-ng -y
ADD target/restapp.jar /restapp.jar
COPY dockerrun.sh /usr/local/bin/dockerrun.sh
RUN chmod +x /usr/local/bin/dockerrun.sh
CMD ["dockerrun.sh"]
dockerrun.sh
#!/bin/sh
java -Dserver.port=8095 -jar /restapp.jar &
hostname="hostname: `hostname`"
nohup stress-ng --vm 4 &
while true; do
    sleep 1000
done
This is addressed in the documentation here: https://docs.docker.com/config/containers/multi-service_container/
If one of your processes depends on the main process, then start your helper process FIRST with a script like wait-for-it, then start the main process SECOND and remove the fg %1 line.
#!/bin/bash
# turn on bash's job control
set -m
# Start the primary process and put it in the background
./my_main_process &
# Start the helper process
./my_helper_process
# the my_helper_process might need to know how to wait on the
# primary process to start before it does its work and returns
# now we bring the primary process back into the foreground
# and leave it there
fg %1
I was trying to solve the exact same problem. Here's the approach that worked for me.
Create a separate shell script that checks the ES status and only starts the SG initialization when ES is ready:
Shell Script
#!/bin/sh
echo ">>>> Right before SG initialization <<<<"
# use while loop to check if elasticsearch is running
while true
do
    netstat -uplnt | grep :9300 | grep LISTEN > /dev/null
    verifier=$?
    if [ 0 = $verifier ]
    then
        echo "Running search guard plugin initialization"
        /elasticsearch/plugins/search-guard-6/tools/sgadmin.sh -h 0.0.0.0 -cd plugins/search-guard-6/sgconfig -icl -key config/client.key -cert config/client.pem -cacert config/root-ca.pem -nhnv
        break
    else
        echo "ES is not running yet"
        sleep 5
    fi
done
Install script in Dockerfile
You will need to install the script in the container so it's accessible after the container starts.
COPY sginit.sh /
RUN chmod +x /sginit.sh
Update entrypoint script
You will need to edit the entrypoint script or run script of your ES image so that it starts sginit.sh in the background BEFORE starting the ES process.
# Run sginit in background waiting for ES to start
/sginit.sh &
This way sginit.sh starts in the background and will only initialize SG after ES has started.
The reason for starting this sginit.sh script in the background before ES is so that it doesn't block ES from starting. The same logic applies if you put it after the ES start: it will never run unless you put the ES start itself in the background.
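In other words, the edited entrypoint would roughly look like this (the exact line that starts ES depends on your image; bin/elasticsearch is just an example):
# Run sginit in background waiting for ES to start
/sginit.sh &
# ...followed by the image's original foreground ES start, e.g.:
exec bin/elasticsearch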
I would suggest putting a CMD in your Dockerfile to execute the script when the container starts:
FROM debian
RUN apt-get update && apt-get install -y nano && apt-get clean
EXPOSE 8484
CMD ["/bin/bash", "/opt/your_app/init.sh"]
There is another way, but before using it, look at your requirements:
ENTRYPOINT "put your code here" && /bin/bash
# example: ENTRYPOINT service nginx start && service ssh start && /bin/bash (use && to separate your commands)
You can also use the wait-for-it script. It will wait on the availability of a host and TCP port. It is useful for synchronizing the spin-up of interdependent services and works like a charm with containers. It does not have any external dependencies, so you can just run it in a RUN command without doing anything else.
A Dockerfile example based on this thread:
FROM elasticsearch
# Make elasticsearch write data to a folder that is not declared as a volume in elasticsearch's official Dockerfile.
RUN mkdir /data && chown -R elasticsearch:elasticsearch /data && echo 'es.path.data: /data' >> config/elasticsearch.yml && echo 'path.data: /data' >> config/elasticsearch.yml
# Download wait-for-it
ADD https://raw.githubusercontent.com/vishnubob/wait-for-it/e1f115e4ca285c3c24e847c4dd4be955e0ed51c2/wait-for-it.sh /utils/wait-for-it.sh
# Copy the files you may need and your insert script
# Insert data into elasticsearch
RUN /docker-entrypoint.sh elasticsearch -p /tmp/epid & /bin/bash /utils/wait-for-it.sh -t 0 localhost:9200 -- path/to/insert/script.sh; kill $(cat /tmp/epid) && wait $(cat /tmp/epid); exit 0;
I have a Docker image which uses a script (/bin/bash /init.sh) as its entrypoint. I would like to execute this script only on the first start of a container. It should be skipped when the container is restarted or started again after a crash of the docker daemon.
Is there any way to do this with docker itself, or do I have to implement some kind of check in the script?
I had the same issue; here is a simple procedure (i.e. workaround) to solve it:
Step 1:
Create a "myStartupScript.sh" script that contains this code:
#!/bin/bash
CONTAINER_ALREADY_STARTED="CONTAINER_ALREADY_STARTED_PLACEHOLDER"
if [ ! -e $CONTAINER_ALREADY_STARTED ]; then
    touch $CONTAINER_ALREADY_STARTED
    echo "-- First container startup --"
    # YOUR_JUST_ONCE_LOGIC_HERE
else
    echo "-- Not first container startup --"
fi
Step 2:
Replace the line "# YOUR_JUST_ONCE_LOGIC_HERE" with the code you want to be executed only the first time the container is started
Step 3:
Set the script as the entrypoint in your Dockerfile:
ENTRYPOINT ["/myStartupScript.sh"]
In summary, the logic is quite simple: it checks whether a specific file is present in the filesystem; if not, it creates it and executes your just-once code. The next time you start the container, the file is already in the filesystem, so the code is not executed.
The entrypoint of a docker container tells the docker daemon what to run when you want to "run" that specific container. Let's ask the questions "what should the container run when it's started the second time?" or "what should the container run after being rebooted?"
Probably, what you are doing is following the same approach you do with "old-school" provisioning mechanisms. Your script is "installing" the needed scripts and you will run your app as a systemd/upstart service, right? If you are doing that, you should change that into a more "dockerized" definition.
The entrypoint for that container should be a script that actually launches your app instead of setting things up. Let's say that you need java installed to be able to run your app. So in the Dockerfile you set up the base container to install everything you need, like:
FROM alpine:edge
RUN apk --update upgrade && apk add openjdk8-jre-base bash
RUN mkdir -p /opt/your_app/ && adduser -HD userapp
ADD target/your_app.jar /opt/your_app/your-app.jar
ADD scripts/init.sh /opt/your_app/init.sh
USER userapp
EXPOSE 8081
CMD ["/bin/bash", "/opt/your_app/init.sh"]
At the company I work for, our containers fetch their configs from consul in the init.sh script before running the actual app (instead of providing a mount point and placing the configs on the host, or embedding them into the container). So the script looks something like this:
#!/bin/bash
echo "Downloading config from consul..."
confd -onetime -backend consul -node $CONSUL_URL -prefix /cfgs/$CONSUL_APP/$CONSUL_ENV_NAME
echo "Launching your-app..."
java -jar /opt/your_app/your-app.jar
One piece of advice I can give you (from my admittedly short experience working with containers) is to treat your containers as if they were stateless once they are provisioned (provisioning being all the commands you run before the entrypoint).
I had to do this and I ended up doing a docker run -d, which just created a detached container and started bash (in the background), followed by a docker exec that did the necessary initialization. Here's an example:
docker run -itd --name=myContainer myImage /bin/bash
docker exec -it myContainer /bin/bash -c /init.sh
Now when I restart my container I can just do
docker start myContainer
docker attach myContainer
This may not be ideal, but it works fine for me.
I wanted to do the same in a Windows container. It can be achieved using Task Scheduler on Windows (the Linux equivalent of Task Scheduler is cron, which you can use in your case). To do this, edit the Dockerfile and add the following lines at the end:
WORKDIR /app
COPY myTask.ps1 .
RUN schtasks /Create /TN myTask /SC ONSTART /TR "c:\WINDOWS\system32\WindowsPowerShell\v1.0\powershell.exe C:\app\myTask.ps1" /ru SYSTEM
This creates a task named myTask, runs it ONSTART, and the task itself executes a PowerShell script placed at "C:\app\myTask.ps1".
This myTask.ps1 script will do whatever initialization you need to do on container startup. Make sure you delete this task once it has executed successfully, or else it will run at every startup. To delete it you can use the following command at the end of the myTask.ps1 script:
schtasks /Delete /TN myTask /F
I'm using an Alpine flavor from iron.io. I want to auto-run a trivial 'blink' script as a service when the Docker image starts. (I want derivative images that use this as a base to not know/care about this service--it'd just be "inherited" and run.) I was using S6 for this, and that works well, but wanted to see if something already built into Alpine would work out-of-the-box.
My Dockerfile:
FROM iron/scala
ADD blinkin /bin/
ADD blink /etc/init.d/
RUN rc-update add blink default
And my service script:
#!/sbin/openrc-run
command="/bin/blinkin"
depend()
{
    need localmount
}
The /bin/blinkin script:
#!/bin/bash
for I in $(seq 1 5); do
    echo "BLINK!"
    sleep 1
done
So I build the Docker image and run it. I see no output (BLINK!...). My script is in /bin and I can run it, and that works. My blink script is in /etc/init.d and symlinked from /etc/runlevels/default. So everything looks OK, but it doesn't seem as though anything has run.
If I try 'rc-service blink start' I see no "BLINK!" output, but I do get this:
* WARNING: blink is already starting
What am I doing wrong?
You may find my dockerfy utility useful for starting services and pre-running initialization commands before the primary command starts. See https://github.com/markriggins/dockerfy
For example:
RUN wget https://github.com/markriggins/dockerfy/releases/download/0.2.4/dockerfy-linux-amd64-0.2.4.tar.gz; \
tar -C /usr/local/bin -xvzf dockerfy-linux-amd64-*tar.gz; \
rm dockerfy-linux-amd64-*tar.gz;
ENTRYPOINT dockerfy
CMD --start bash -c "while true; do echo BLINK; sleep 1; done" -- \
    --reap -- \
    nginx
Would run a bash script as a service, echoing BLINK every second, while the primary command nginx runs. If nginx exits, then the BLINK service will automatically be stopped.
As an added benefit, any zombie processes left over by nginx will be automatically cleaned up.
You can also tail log files such as /var/log/nginx/error.log to stderr, edit nginx's configuration prior to startup, and much more.
I would like to create a dockerfile that builds a Cassandra image with a keyspace and schema already there when the image starts.
In general, how do you create a Dockerfile that will build an image that includes some step(s) that can't really be done until the container is running, at least the first time?
Right now, I have two steps: build the cassandra image from an existing cassandra Dockerfile that maps a volume with the CQL schema files into a temporary directory, and then run docker exec with cqlsh to import the schema after the image has been started as a container.
But that doesn't create an image with the schema - just a container. That container could be saved as an image, but that's cumbersome.
docker run --name $CASSANDRA_NAME -d \
-h $CASSANDRA_NAME \
-v $CASSANDRA_DATA_DIR:/data \
-v $CASSANDRA_DIR/target:/tmp/schema \
tobert/cassandra:2.1.7
then
docker exec $CASSANDRA_NAME cqlsh -f /tmp/schema/create_keyspace.cql
docker exec $CASSANDRA_NAME cqlsh -f /tmp/schema/schema01.cql
# etc
This works, but it makes it impossible to use with tools like Docker Compose, since linked containers/services will start up too and expect the schema to be in place.
I saw one attempt where the cassandra process was started in the background in the Dockerfile during build and then cqlsh was run, but I don't think that worked too well.
OK, I had this issue and someone advised me a strategy to deal with it:
Start from an existing Cassandra Dockerfile, the official one for example
Remove the ENTRYPOINT stuff
Copy the schema (.cql) file and data (.csv) into the image and put it somewhere, /opt/data for example
Create a shell script that will be used as the last command to start Cassandra (a sketch follows after these steps)
a. start cassandra with $CASSANDRA_HOME/bin/cassandra
b. IF there is a $CASSANDRA_HOME/data/data/your_keyspace-xxxx folder and it's not empty, do nothing more
c. Else
1. sleep some time to allow the server to listen on port 9042
2. when port 9042 is listening, execute the .cql script to load csv files
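A minimal sketch of that startup script (your_keyspace, the /opt/data path and the log location are placeholders; it assumes $CASSANDRA_HOME is set in the image):
#!/bin/bash
# start-cassandra.sh: last command of the image, implementing steps a-c above
if ls "$CASSANDRA_HOME"/data/data/your_keyspace-* >/dev/null 2>&1; then
    # keyspace directory already present: nothing more to load
    exec "$CASSANDRA_HOME/bin/cassandra" -f
fi
# first start: run cassandra in the background, wait for CQL on port 9042, then load the schema
"$CASSANDRA_HOME/bin/cassandra"
until cqlsh -e 'DESCRIBE KEYSPACES' >/dev/null 2>&1; do
    echo "waiting for cassandra to listen on 9042..."
    sleep 5
done
cqlsh -f /opt/data/schema.cql
# keep a foreground process so the container stays up
tail -F "$CASSANDRA_HOME"/logs/system.log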
I found this procedure rather cumbersome, but there seems to be no other way around it. For a Cassandra hands-on lab, I found it easier to create a VM image using Vagrant and Ansible.
Make a Dockerfile named Dockerfile_CAS:
FROM cassandra:latest
COPY ddl.cql docker-entrypoint-initdb.d/
COPY docker-entrypoint.sh /docker-entrypoint.sh
RUN ls -la *.sh; chmod +x *.sh; ls -la *.sh
ENTRYPOINT ["/docker-entrypoint.sh"]
CMD ["cassandra", "-f"]
Edit docker-entrypoint.sh and add:
for f in docker-entrypoint-initdb.d/*; do
    case "$f" in
        *.sh)  echo "$0: running $f"; . "$f" ;;
        *.cql) echo "$0: running $f" && until cqlsh -f "$f"; do >&2 echo "Cassandra is unavailable - sleeping"; sleep 2; done & ;;
        *)     echo "$0: ignoring $f" ;;
    esac
    echo
done
above the exec "$@" line.
docker build -t suraj1287/cassandra -f Dockerfile_CAS .
and rebuild the image...
Another approach used by our team is to create the schema on server init.
Our Java code tests whether the SCHEMA exists; if not (new environment, new deployment), it creates it.
The same goes for every new TABLE: automatic CREATE TABLE statements create the required new tables for new data entities when they run in any new cluster (another developer's local environment, preproduction, production).
All this code is isolated inside our DataDriver classes for portability, in case we swap Cassandra for another DB in some client or project.
This prevents a lot of hassle both for admins and for developers.
This approach is even valid for initial data loading, which we use in tests.
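For reference, the same create-if-missing idea can be expressed directly in CQL (the keyspace, table and replication settings below are just placeholders):
CREATE KEYSPACE IF NOT EXISTS my_keyspace
  WITH replication = {'class': 'SimpleStrategy', 'replication_factor': 1};

CREATE TABLE IF NOT EXISTS my_keyspace.my_table (
  id uuid PRIMARY KEY,
  payload text
);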